Test Report: KVM_Linux 19338

Commit: 0eb0b855c9cd12df3081fe3f67aa770440dcda12 : 2024-07-29 : 35550

Failed tests (3/350)

Order  Failed test                                                    Duration (s)
364    TestStartStop/group/no-preload/serial/FirstStart               92.19
374    TestStartStop/group/no-preload/serial/DeployApp                0.61
375    TestStartStop/group/no-preload/serial/EnableAddonWhileActive   59.32
TestStartStop/group/no-preload/serial/FirstStart (92.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-965778 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-965778 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: exit status 90 (1m31.895872115s)

-- stdout --
	* [no-preload-965778] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "no-preload-965778" primary control-plane node in "no-preload-965778" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 13:50:33.120648  239535 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:50:33.120774  239535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:50:33.120785  239535 out.go:304] Setting ErrFile to fd 2...
	I0729 13:50:33.120791  239535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:50:33.121058  239535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 13:50:33.121690  239535 out.go:298] Setting JSON to false
	I0729 13:50:33.122853  239535 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12784,"bootTime":1722248249,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 13:50:33.122920  239535 start.go:139] virtualization: kvm guest
	I0729 13:50:33.151628  239535 out.go:177] * [no-preload-965778] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 13:50:33.153161  239535 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 13:50:33.153169  239535 notify.go:220] Checking for updates...
	I0729 13:50:33.159139  239535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 13:50:33.161421  239535 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 13:50:33.162843  239535 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 13:50:33.164384  239535 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 13:50:33.165903  239535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 13:50:33.167879  239535 config.go:182] Loaded profile config "bridge-263785": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:50:33.168031  239535 config.go:182] Loaded profile config "kubenet-263785": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:50:33.168213  239535 config.go:182] Loaded profile config "old-k8s-version-436965": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 13:50:33.168354  239535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 13:50:33.214393  239535 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 13:50:33.215822  239535 start.go:297] selected driver: kvm2
	I0729 13:50:33.215846  239535 start.go:901] validating driver "kvm2" against <nil>
	I0729 13:50:33.215865  239535 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 13:50:33.216839  239535 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.216988  239535 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-179709/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 13:50:33.236507  239535 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 13:50:33.236622  239535 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 13:50:33.236908  239535 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 13:50:33.237003  239535 cni.go:84] Creating CNI manager for ""
	I0729 13:50:33.237028  239535 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 13:50:33.237042  239535 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 13:50:33.237127  239535 start.go:340] cluster config:
	{Name:no-preload-965778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-965778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: St
aticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 13:50:33.237282  239535 iso.go:125] acquiring lock: {Name:mkba981b31daf918fe5bcf2915c3bde7a7b27504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.239274  239535 out.go:177] * Starting "no-preload-965778" primary control-plane node in "no-preload-965778" cluster
	I0729 13:50:33.240619  239535 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 13:50:33.240813  239535 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/config.json ...
	I0729 13:50:33.240856  239535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/config.json: {Name:mk9b7984f914593ebd833bc76b9f06141a7b6dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 13:50:33.240997  239535 cache.go:107] acquiring lock: {Name:mk2383c9e6b7a6915eaef03d7a80b8b4c60fea1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241028  239535 cache.go:107] acquiring lock: {Name:mk398334d3d819007fedae8661b9c661c8c44e4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241051  239535 cache.go:107] acquiring lock: {Name:mk99ddba910bfeee9b9e6e7f93864113c560ef1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241071  239535 cache.go:107] acquiring lock: {Name:mk171fee556637ae237dd69cac1919cc60198aa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241123  239535 cache.go:107] acquiring lock: {Name:mk97c941efaaf4fabb809b41b1512acca163642c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241176  239535 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:50:33.241179  239535 cache.go:107] acquiring lock: {Name:mk1000b8dd13ae6cebec4502d49d7ca75a5271d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241216  239535 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 13:50:33.241237  239535 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:50:33.241264  239535 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:50:33.241279  239535 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:50:33.241212  239535 start.go:360] acquireMachinesLock for no-preload-965778: {Name:mk5543e9efbb5e2375a199061215cf6ce4b521a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 13:50:33.241001  239535 cache.go:107] acquiring lock: {Name:mke6bfffaf8f1c5e0a479e46c52aea3732686c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241455  239535 start.go:364] duration metric: took 43.85µs to acquireMachinesLock for "no-preload-965778"
	I0729 13:50:33.241000  239535 cache.go:107] acquiring lock: {Name:mkb5b51dcee4dcba6dd5d3a01a45080dfbcd0e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 13:50:33.241486  239535 start.go:93] Provisioning new machine with config: &{Name:no-preload-965778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-965778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 13:50:33.241587  239535 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 13:50:33.242111  239535 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:50:33.242180  239535 cache.go:115] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 13:50:33.242197  239535 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.204009ms
	I0729 13:50:33.242209  239535 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 13:50:33.242111  239535 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:50:33.242956  239535 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 13:50:33.243083  239535 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 13:50:33.243266  239535 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 13:50:33.243402  239535 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 13:50:33.243638  239535 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 13:50:33.243823  239535 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 13:50:33.244119  239535 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 13:50:33.244254  239535 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 13:50:33.244361  239535 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:50:33.244406  239535 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:50:33.265402  239535 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0729 13:50:33.266828  239535 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:50:33.267584  239535 main.go:141] libmachine: Using API Version  1
	I0729 13:50:33.267610  239535 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:50:33.267990  239535 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:50:33.268303  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetMachineName
	I0729 13:50:33.268489  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:50:33.268677  239535 start.go:159] libmachine.API.Create for "no-preload-965778" (driver="kvm2")
	I0729 13:50:33.268702  239535 client.go:168] LocalClient.Create starting
	I0729 13:50:33.268737  239535 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca.pem
	I0729 13:50:33.268780  239535 main.go:141] libmachine: Decoding PEM data...
	I0729 13:50:33.268796  239535 main.go:141] libmachine: Parsing certificate...
	I0729 13:50:33.268857  239535 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19338-179709/.minikube/certs/cert.pem
	I0729 13:50:33.268885  239535 main.go:141] libmachine: Decoding PEM data...
	I0729 13:50:33.268898  239535 main.go:141] libmachine: Parsing certificate...
	I0729 13:50:33.268926  239535 main.go:141] libmachine: Running pre-create checks...
	I0729 13:50:33.268938  239535 main.go:141] libmachine: (no-preload-965778) Calling .PreCreateCheck
	I0729 13:50:33.270647  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetConfigRaw
	I0729 13:50:33.271143  239535 main.go:141] libmachine: Creating machine...
	I0729 13:50:33.271179  239535 main.go:141] libmachine: (no-preload-965778) Calling .Create
	I0729 13:50:33.271357  239535 main.go:141] libmachine: (no-preload-965778) Creating KVM machine...
	I0729 13:50:33.272884  239535 main.go:141] libmachine: (no-preload-965778) DBG | found existing default KVM network
	I0729 13:50:33.274282  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.274130  239558 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e8:74:98} reservation:<nil>}
	I0729 13:50:33.275557  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.275348  239558 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:b9:00} reservation:<nil>}
	I0729 13:50:33.276633  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.276578  239558 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:65:f6} reservation:<nil>}
	I0729 13:50:33.278051  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.277723  239558 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030bf00}
	I0729 13:50:33.278114  239535 main.go:141] libmachine: (no-preload-965778) DBG | created network xml: 
	I0729 13:50:33.278127  239535 main.go:141] libmachine: (no-preload-965778) DBG | <network>
	I0729 13:50:33.278135  239535 main.go:141] libmachine: (no-preload-965778) DBG |   <name>mk-no-preload-965778</name>
	I0729 13:50:33.278144  239535 main.go:141] libmachine: (no-preload-965778) DBG |   <dns enable='no'/>
	I0729 13:50:33.278150  239535 main.go:141] libmachine: (no-preload-965778) DBG |   
	I0729 13:50:33.278159  239535 main.go:141] libmachine: (no-preload-965778) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 13:50:33.278167  239535 main.go:141] libmachine: (no-preload-965778) DBG |     <dhcp>
	I0729 13:50:33.278176  239535 main.go:141] libmachine: (no-preload-965778) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 13:50:33.278182  239535 main.go:141] libmachine: (no-preload-965778) DBG |     </dhcp>
	I0729 13:50:33.278190  239535 main.go:141] libmachine: (no-preload-965778) DBG |   </ip>
	I0729 13:50:33.278196  239535 main.go:141] libmachine: (no-preload-965778) DBG |   
	I0729 13:50:33.278204  239535 main.go:141] libmachine: (no-preload-965778) DBG | </network>
	I0729 13:50:33.278210  239535 main.go:141] libmachine: (no-preload-965778) DBG | 
	I0729 13:50:33.284755  239535 main.go:141] libmachine: (no-preload-965778) DBG | trying to create private KVM network mk-no-preload-965778 192.168.72.0/24...
	I0729 13:50:33.392508  239535 main.go:141] libmachine: (no-preload-965778) Setting up store path in /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778 ...
	I0729 13:50:33.392541  239535 main.go:141] libmachine: (no-preload-965778) DBG | private KVM network mk-no-preload-965778 192.168.72.0/24 created
	I0729 13:50:33.392563  239535 main.go:141] libmachine: (no-preload-965778) Building disk image from file:///home/jenkins/minikube-integration/19338-179709/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 13:50:33.392584  239535 main.go:141] libmachine: (no-preload-965778) Downloading /home/jenkins/minikube-integration/19338-179709/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19338-179709/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 13:50:33.392609  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.391731  239558 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 13:50:33.398994  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 13:50:33.407559  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 13:50:33.412668  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 13:50:33.418793  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0729 13:50:33.436413  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 13:50:33.438349  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 13:50:33.448720  239535 cache.go:162] opening:  /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 13:50:33.494211  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0729 13:50:33.494245  239535 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 253.233608ms
	I0729 13:50:33.494260  239535 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0729 13:50:33.690752  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.690601  239558 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa...
	I0729 13:50:33.897366  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.897277  239558 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/no-preload-965778.rawdisk...
	I0729 13:50:33.897402  239535 main.go:141] libmachine: (no-preload-965778) DBG | Writing magic tar header
	I0729 13:50:33.897462  239535 main.go:141] libmachine: (no-preload-965778) DBG | Writing SSH key tar header
	I0729 13:50:33.897478  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:33.897409  239558 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778 ...
	I0729 13:50:33.897532  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778
	I0729 13:50:33.897557  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778 (perms=drwx------)
	I0729 13:50:33.897577  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins/minikube-integration/19338-179709/.minikube/machines (perms=drwxr-xr-x)
	I0729 13:50:33.897605  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-179709/.minikube/machines
	I0729 13:50:33.897615  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 13:50:33.897625  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19338-179709
	I0729 13:50:33.897630  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 13:50:33.897645  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home/jenkins
	I0729 13:50:33.897655  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins/minikube-integration/19338-179709/.minikube (perms=drwxr-xr-x)
	I0729 13:50:33.897664  239535 main.go:141] libmachine: (no-preload-965778) DBG | Checking permissions on dir: /home
	I0729 13:50:33.897681  239535 main.go:141] libmachine: (no-preload-965778) DBG | Skipping /home - not owner
	I0729 13:50:33.897740  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins/minikube-integration/19338-179709 (perms=drwxrwxr-x)
	I0729 13:50:33.897773  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 13:50:33.897859  239535 main.go:141] libmachine: (no-preload-965778) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 13:50:33.897908  239535 main.go:141] libmachine: (no-preload-965778) Creating domain...
	I0729 13:50:33.899145  239535 main.go:141] libmachine: (no-preload-965778) define libvirt domain using xml: 
	I0729 13:50:33.899182  239535 main.go:141] libmachine: (no-preload-965778) <domain type='kvm'>
	I0729 13:50:33.899228  239535 main.go:141] libmachine: (no-preload-965778)   <name>no-preload-965778</name>
	I0729 13:50:33.899252  239535 main.go:141] libmachine: (no-preload-965778)   <memory unit='MiB'>2200</memory>
	I0729 13:50:33.899264  239535 main.go:141] libmachine: (no-preload-965778)   <vcpu>2</vcpu>
	I0729 13:50:33.899275  239535 main.go:141] libmachine: (no-preload-965778)   <features>
	I0729 13:50:33.899284  239535 main.go:141] libmachine: (no-preload-965778)     <acpi/>
	I0729 13:50:33.899291  239535 main.go:141] libmachine: (no-preload-965778)     <apic/>
	I0729 13:50:33.899302  239535 main.go:141] libmachine: (no-preload-965778)     <pae/>
	I0729 13:50:33.899312  239535 main.go:141] libmachine: (no-preload-965778)     
	I0729 13:50:33.899321  239535 main.go:141] libmachine: (no-preload-965778)   </features>
	I0729 13:50:33.899333  239535 main.go:141] libmachine: (no-preload-965778)   <cpu mode='host-passthrough'>
	I0729 13:50:33.899344  239535 main.go:141] libmachine: (no-preload-965778)   
	I0729 13:50:33.899356  239535 main.go:141] libmachine: (no-preload-965778)   </cpu>
	I0729 13:50:33.899365  239535 main.go:141] libmachine: (no-preload-965778)   <os>
	I0729 13:50:33.899372  239535 main.go:141] libmachine: (no-preload-965778)     <type>hvm</type>
	I0729 13:50:33.899380  239535 main.go:141] libmachine: (no-preload-965778)     <boot dev='cdrom'/>
	I0729 13:50:33.899396  239535 main.go:141] libmachine: (no-preload-965778)     <boot dev='hd'/>
	I0729 13:50:33.899410  239535 main.go:141] libmachine: (no-preload-965778)     <bootmenu enable='no'/>
	I0729 13:50:33.899417  239535 main.go:141] libmachine: (no-preload-965778)   </os>
	I0729 13:50:33.899426  239535 main.go:141] libmachine: (no-preload-965778)   <devices>
	I0729 13:50:33.899435  239535 main.go:141] libmachine: (no-preload-965778)     <disk type='file' device='cdrom'>
	I0729 13:50:33.899450  239535 main.go:141] libmachine: (no-preload-965778)       <source file='/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/boot2docker.iso'/>
	I0729 13:50:33.899466  239535 main.go:141] libmachine: (no-preload-965778)       <target dev='hdc' bus='scsi'/>
	I0729 13:50:33.899484  239535 main.go:141] libmachine: (no-preload-965778)       <readonly/>
	I0729 13:50:33.899503  239535 main.go:141] libmachine: (no-preload-965778)     </disk>
	I0729 13:50:33.899514  239535 main.go:141] libmachine: (no-preload-965778)     <disk type='file' device='disk'>
	I0729 13:50:33.899523  239535 main.go:141] libmachine: (no-preload-965778)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 13:50:33.899536  239535 main.go:141] libmachine: (no-preload-965778)       <source file='/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/no-preload-965778.rawdisk'/>
	I0729 13:50:33.899548  239535 main.go:141] libmachine: (no-preload-965778)       <target dev='hda' bus='virtio'/>
	I0729 13:50:33.899556  239535 main.go:141] libmachine: (no-preload-965778)     </disk>
	I0729 13:50:33.899564  239535 main.go:141] libmachine: (no-preload-965778)     <interface type='network'>
	I0729 13:50:33.899570  239535 main.go:141] libmachine: (no-preload-965778)       <source network='mk-no-preload-965778'/>
	I0729 13:50:33.899577  239535 main.go:141] libmachine: (no-preload-965778)       <model type='virtio'/>
	I0729 13:50:33.899582  239535 main.go:141] libmachine: (no-preload-965778)     </interface>
	I0729 13:50:33.899590  239535 main.go:141] libmachine: (no-preload-965778)     <interface type='network'>
	I0729 13:50:33.899595  239535 main.go:141] libmachine: (no-preload-965778)       <source network='default'/>
	I0729 13:50:33.899600  239535 main.go:141] libmachine: (no-preload-965778)       <model type='virtio'/>
	I0729 13:50:33.899605  239535 main.go:141] libmachine: (no-preload-965778)     </interface>
	I0729 13:50:33.899613  239535 main.go:141] libmachine: (no-preload-965778)     <serial type='pty'>
	I0729 13:50:33.899618  239535 main.go:141] libmachine: (no-preload-965778)       <target port='0'/>
	I0729 13:50:33.899625  239535 main.go:141] libmachine: (no-preload-965778)     </serial>
	I0729 13:50:33.899630  239535 main.go:141] libmachine: (no-preload-965778)     <console type='pty'>
	I0729 13:50:33.899637  239535 main.go:141] libmachine: (no-preload-965778)       <target type='serial' port='0'/>
	I0729 13:50:33.899642  239535 main.go:141] libmachine: (no-preload-965778)     </console>
	I0729 13:50:33.899649  239535 main.go:141] libmachine: (no-preload-965778)     <rng model='virtio'>
	I0729 13:50:33.899655  239535 main.go:141] libmachine: (no-preload-965778)       <backend model='random'>/dev/random</backend>
	I0729 13:50:33.899662  239535 main.go:141] libmachine: (no-preload-965778)     </rng>
	I0729 13:50:33.899667  239535 main.go:141] libmachine: (no-preload-965778)     
	I0729 13:50:33.899673  239535 main.go:141] libmachine: (no-preload-965778)     
	I0729 13:50:33.899678  239535 main.go:141] libmachine: (no-preload-965778)   </devices>
	I0729 13:50:33.899687  239535 main.go:141] libmachine: (no-preload-965778) </domain>
	I0729 13:50:33.899693  239535 main.go:141] libmachine: (no-preload-965778) 
	I0729 13:50:33.904481  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:ba:04:b6 in network default
	I0729 13:50:33.905746  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:33.905768  239535 main.go:141] libmachine: (no-preload-965778) Ensuring networks are active...
	I0729 13:50:33.906643  239535 main.go:141] libmachine: (no-preload-965778) Ensuring network default is active
	I0729 13:50:33.908386  239535 main.go:141] libmachine: (no-preload-965778) Ensuring network mk-no-preload-965778 is active
	I0729 13:50:33.908753  239535 main.go:141] libmachine: (no-preload-965778) Getting domain xml...
	I0729 13:50:33.910087  239535 main.go:141] libmachine: (no-preload-965778) Creating domain...
	I0729 13:50:33.958566  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 13:50:33.958599  239535 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 717.422316ms
	I0729 13:50:33.958623  239535 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 13:50:34.593730  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 13:50:34.593758  239535 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.352638897s
	I0729 13:50:34.593788  239535 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 13:50:34.830980  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 13:50:34.831024  239535 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 1.589973786s
	I0729 13:50:34.831040  239535 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 13:50:34.923079  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 13:50:34.923106  239535 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 1.682158826s
	I0729 13:50:34.923120  239535 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 13:50:35.285735  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 13:50:35.285800  239535 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 2.044836045s
	I0729 13:50:35.285824  239535 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 13:50:35.579198  239535 main.go:141] libmachine: (no-preload-965778) Waiting to get IP...
	I0729 13:50:35.580268  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:35.580852  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:35.580879  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:35.580839  239558 retry.go:31] will retry after 206.95331ms: waiting for machine to come up
	I0729 13:50:35.789450  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:35.789482  239535 cache.go:157] /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 13:50:35.789508  239535 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 2.548460766s
	I0729 13:50:35.789531  239535 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 13:50:35.789555  239535 cache.go:87] Successfully saved all images to host disk.
	I0729 13:50:35.790122  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:35.790149  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:35.790066  239558 retry.go:31] will retry after 288.621473ms: waiting for machine to come up
	I0729 13:50:36.080680  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:36.081392  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:36.081425  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:36.081342  239558 retry.go:31] will retry after 436.689219ms: waiting for machine to come up
	I0729 13:50:36.520103  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:36.520712  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:36.520736  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:36.520662  239558 retry.go:31] will retry after 589.225807ms: waiting for machine to come up
	I0729 13:50:37.111302  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:37.111997  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:37.112034  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:37.111906  239558 retry.go:31] will retry after 542.055563ms: waiting for machine to come up
	I0729 13:50:37.655727  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:37.656332  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:37.656359  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:37.656253  239558 retry.go:31] will retry after 615.026063ms: waiting for machine to come up
	I0729 13:50:38.272573  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:38.273144  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:38.273176  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:38.273054  239558 retry.go:31] will retry after 993.957791ms: waiting for machine to come up
	I0729 13:50:39.268255  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:39.269027  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:39.269056  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:39.268982  239558 retry.go:31] will retry after 1.319638539s: waiting for machine to come up
	I0729 13:50:40.590003  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:40.590635  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:40.590662  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:40.590549  239558 retry.go:31] will retry after 1.203604987s: waiting for machine to come up
	I0729 13:50:41.796149  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:41.796737  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:41.796769  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:41.796683  239558 retry.go:31] will retry after 2.113339091s: waiting for machine to come up
	I0729 13:50:43.917499  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:43.919875  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:43.919901  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:43.919787  239558 retry.go:31] will retry after 2.268854268s: waiting for machine to come up
	I0729 13:50:46.189942  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:46.190484  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:46.190503  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:46.190434  239558 retry.go:31] will retry after 3.437836645s: waiting for machine to come up
	I0729 13:50:49.630067  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:49.630670  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:49.630702  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:49.630612  239558 retry.go:31] will retry after 3.436088486s: waiting for machine to come up
	I0729 13:50:53.070764  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:53.071399  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find current IP address of domain no-preload-965778 in network mk-no-preload-965778
	I0729 13:50:53.071450  239535 main.go:141] libmachine: (no-preload-965778) DBG | I0729 13:50:53.071366  239558 retry.go:31] will retry after 5.57695795s: waiting for machine to come up
	I0729 13:50:58.650358  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.650949  239535 main.go:141] libmachine: (no-preload-965778) Found IP for machine: 192.168.72.204
	I0729 13:50:58.650975  239535 main.go:141] libmachine: (no-preload-965778) Reserving static IP address...
	I0729 13:50:58.650992  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has current primary IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.651464  239535 main.go:141] libmachine: (no-preload-965778) DBG | unable to find host DHCP lease matching {name: "no-preload-965778", mac: "52:54:00:1b:43:04", ip: "192.168.72.204"} in network mk-no-preload-965778
	I0729 13:50:58.734544  239535 main.go:141] libmachine: (no-preload-965778) DBG | Getting to WaitForSSH function...
	I0729 13:50:58.734572  239535 main.go:141] libmachine: (no-preload-965778) Reserved static IP address: 192.168.72.204
	I0729 13:50:58.734586  239535 main.go:141] libmachine: (no-preload-965778) Waiting for SSH to be available...
	I0729 13:50:58.737499  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.737933  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:58.737968  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.738290  239535 main.go:141] libmachine: (no-preload-965778) DBG | Using SSH client type: external
	I0729 13:50:58.738324  239535 main.go:141] libmachine: (no-preload-965778) DBG | Using SSH private key: /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa (-rw-------)
	I0729 13:50:58.738353  239535 main.go:141] libmachine: (no-preload-965778) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 13:50:58.738374  239535 main.go:141] libmachine: (no-preload-965778) DBG | About to run SSH command:
	I0729 13:50:58.738387  239535 main.go:141] libmachine: (no-preload-965778) DBG | exit 0
	I0729 13:50:58.873581  239535 main.go:141] libmachine: (no-preload-965778) DBG | SSH cmd err, output: <nil>: 
	I0729 13:50:58.873881  239535 main.go:141] libmachine: (no-preload-965778) KVM machine creation complete!
	I0729 13:50:58.874213  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetConfigRaw
	I0729 13:50:58.874829  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:50:58.875042  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:50:58.875244  239535 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 13:50:58.875261  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetState
	I0729 13:50:58.876874  239535 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 13:50:58.876891  239535 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 13:50:58.876898  239535 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 13:50:58.876907  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:58.879788  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.880153  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:58.880178  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.880300  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:58.880575  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:58.880782  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:58.880943  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:58.881127  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:50:58.881351  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:50:58.881363  239535 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 13:50:58.988069  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:50:58.988095  239535 main.go:141] libmachine: Detecting the provisioner...
	I0729 13:50:58.988105  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:58.990840  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.991202  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:58.991231  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:58.991386  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:58.991598  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:58.991825  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:58.991991  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:58.992144  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:50:58.992302  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:50:58.992312  239535 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 13:50:59.101295  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 13:50:59.101380  239535 main.go:141] libmachine: found compatible host: buildroot
	I0729 13:50:59.101395  239535 main.go:141] libmachine: Provisioning with buildroot...
	I0729 13:50:59.101405  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetMachineName
	I0729 13:50:59.101695  239535 buildroot.go:166] provisioning hostname "no-preload-965778"
	I0729 13:50:59.101726  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetMachineName
	I0729 13:50:59.101905  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:59.104618  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.105027  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.105056  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.105237  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:59.105434  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.105595  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.105754  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:59.105954  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:50:59.106120  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:50:59.106132  239535 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-965778 && echo "no-preload-965778" | sudo tee /etc/hostname
	I0729 13:50:59.233001  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-965778
	
	I0729 13:50:59.233027  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:59.235974  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.236373  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.236397  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.236561  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:59.236770  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.236989  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.237142  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:59.237330  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:50:59.237499  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:50:59.237515  239535 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-965778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-965778/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-965778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 13:50:59.353226  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 13:50:59.353267  239535 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19338-179709/.minikube CaCertPath:/home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19338-179709/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19338-179709/.minikube}
	I0729 13:50:59.353336  239535 buildroot.go:174] setting up certificates
	I0729 13:50:59.353352  239535 provision.go:84] configureAuth start
	I0729 13:50:59.353372  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetMachineName
	I0729 13:50:59.353692  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetIP
	I0729 13:50:59.356530  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.356943  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.356987  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.357187  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:59.360029  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.360513  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.360541  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.360699  239535 provision.go:143] copyHostCerts
	I0729 13:50:59.360768  239535 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-179709/.minikube/ca.pem, removing ...
	I0729 13:50:59.360783  239535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-179709/.minikube/ca.pem
	I0729 13:50:59.360871  239535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19338-179709/.minikube/ca.pem (1078 bytes)
	I0729 13:50:59.360999  239535 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-179709/.minikube/cert.pem, removing ...
	I0729 13:50:59.361011  239535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-179709/.minikube/cert.pem
	I0729 13:50:59.361048  239535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-179709/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19338-179709/.minikube/cert.pem (1123 bytes)
	I0729 13:50:59.361108  239535 exec_runner.go:144] found /home/jenkins/minikube-integration/19338-179709/.minikube/key.pem, removing ...
	I0729 13:50:59.361115  239535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19338-179709/.minikube/key.pem
	I0729 13:50:59.361136  239535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19338-179709/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19338-179709/.minikube/key.pem (1679 bytes)
	I0729 13:50:59.361185  239535 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19338-179709/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca-key.pem org=jenkins.no-preload-965778 san=[127.0.0.1 192.168.72.204 localhost minikube no-preload-965778]
	I0729 13:50:59.812245  239535 provision.go:177] copyRemoteCerts
	I0729 13:50:59.812338  239535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 13:50:59.812381  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:59.816096  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.816549  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.816577  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.816760  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:59.817000  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.817163  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:59.817347  239535 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa Username:docker}
	I0729 13:50:59.899636  239535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-179709/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 13:50:59.926956  239535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-179709/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 13:50:59.955486  239535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-179709/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 13:50:59.978729  239535 provision.go:87] duration metric: took 625.360909ms to configureAuth
	I0729 13:50:59.978758  239535 buildroot.go:189] setting minikube options for container-runtime
	I0729 13:50:59.978934  239535 config.go:182] Loaded profile config "no-preload-965778": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 13:50:59.978960  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:50:59.979198  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:50:59.981797  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.982201  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:50:59.982240  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:50:59.982412  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:50:59.982604  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.982799  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:50:59.982963  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:50:59.983160  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:50:59.983336  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:50:59.983348  239535 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 13:51:00.094340  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 13:51:00.094367  239535 buildroot.go:70] root file system type: tmpfs
	I0729 13:51:00.094502  239535 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 13:51:00.094527  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:00.097741  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:00.098190  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:00.098220  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:00.098360  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:00.098601  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:00.098795  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:00.098944  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:00.099102  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:51:00.099293  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:51:00.099356  239535 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 13:51:00.234013  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 13:51:00.234075  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:00.237433  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:00.237830  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:00.237856  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:00.238051  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:00.238243  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:00.238442  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:00.238569  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:00.238752  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:51:00.238967  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:51:00.238994  239535 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 13:51:02.080738  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
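	The `diff … || { mv …; systemctl … }` command above is an idempotent-update pattern: the new unit file replaces the live one, and the daemon is reloaded, only when the content differs (here `diff` fails because the file does not exist yet, so the unit is installed and enabled for the first time). A minimal sketch of the same pattern against throwaway temp files rather than the real systemd paths (all paths below are illustrative):

```shell
#!/usr/bin/env sh
# Idempotent unit install: replace the live file and signal a reload
# only when the candidate differs (or when no live file exists yet).
tmpdir=$(mktemp -d)
current="$tmpdir/docker.service"   # stands in for /lib/systemd/system/docker.service
new="$tmpdir/docker.service.new"

printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd' > "$new"

if ! diff -u "$current" "$new" >/dev/null 2>&1; then
    # Content changed (or no live file yet): install it; a real run
    # would follow with systemctl daemon-reload / restart.
    mv "$new" "$current"
    echo "unit updated, reload required"
else
    rm -f "$new"
    echo "unit unchanged"
fi
```

	The double `ExecStart=` in the sketch is deliberate: an empty `ExecStart=` first clears any command inherited from a base unit, exactly as the comments in the generated docker.service above explain.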
	I0729 13:51:02.080796  239535 main.go:141] libmachine: Checking connection to Docker...
	I0729 13:51:02.080812  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetURL
	I0729 13:51:02.082214  239535 main.go:141] libmachine: (no-preload-965778) DBG | Using libvirt version 6000000
	I0729 13:51:02.084852  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.085251  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.085282  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.085467  239535 main.go:141] libmachine: Docker is up and running!
	I0729 13:51:02.085487  239535 main.go:141] libmachine: Reticulating splines...
	I0729 13:51:02.085497  239535 client.go:171] duration metric: took 28.816786221s to LocalClient.Create
	I0729 13:51:02.085526  239535 start.go:167] duration metric: took 28.816849347s to libmachine.API.Create "no-preload-965778"
	I0729 13:51:02.085538  239535 start.go:293] postStartSetup for "no-preload-965778" (driver="kvm2")
	I0729 13:51:02.085555  239535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 13:51:02.085578  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:51:02.085846  239535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 13:51:02.085874  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:02.087743  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.088101  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.088121  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.088276  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:02.088493  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:02.088666  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:02.088841  239535 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa Username:docker}
	I0729 13:51:02.175046  239535 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 13:51:02.179954  239535 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 13:51:02.180022  239535 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-179709/.minikube/addons for local assets ...
	I0729 13:51:02.180098  239535 filesync.go:126] Scanning /home/jenkins/minikube-integration/19338-179709/.minikube/files for local assets ...
	I0729 13:51:02.180196  239535 filesync.go:149] local asset: /home/jenkins/minikube-integration/19338-179709/.minikube/files/etc/ssl/certs/1869512.pem -> 1869512.pem in /etc/ssl/certs
	I0729 13:51:02.180305  239535 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 13:51:02.189514  239535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19338-179709/.minikube/files/etc/ssl/certs/1869512.pem --> /etc/ssl/certs/1869512.pem (1708 bytes)
	I0729 13:51:02.212462  239535 start.go:296] duration metric: took 126.905287ms for postStartSetup
	I0729 13:51:02.212515  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetConfigRaw
	I0729 13:51:02.213180  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetIP
	I0729 13:51:02.216086  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.216575  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.216614  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.216843  239535 profile.go:143] Saving config to /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/config.json ...
	I0729 13:51:02.217060  239535 start.go:128] duration metric: took 28.975460758s to createHost
	I0729 13:51:02.217095  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:02.219753  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.220050  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.220090  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.220217  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:02.220412  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:02.220571  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:02.220734  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:02.220912  239535 main.go:141] libmachine: Using SSH client type: native
	I0729 13:51:02.221107  239535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I0729 13:51:02.221119  239535 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 13:51:02.334415  239535 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722261062.285416261
	
	I0729 13:51:02.334441  239535 fix.go:216] guest clock: 1722261062.285416261
	I0729 13:51:02.334451  239535 fix.go:229] Guest: 2024-07-29 13:51:02.285416261 +0000 UTC Remote: 2024-07-29 13:51:02.217076962 +0000 UTC m=+29.138140018 (delta=68.339299ms)
	I0729 13:51:02.334489  239535 fix.go:200] guest clock delta is within tolerance: 68.339299ms
	I0729 13:51:02.334497  239535 start.go:83] releasing machines lock for "no-preload-965778", held for 29.093026118s
	I0729 13:51:02.334526  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:51:02.334813  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetIP
	I0729 13:51:02.337911  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.338334  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.338361  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.338545  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:51:02.339114  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:51:02.339336  239535 main.go:141] libmachine: (no-preload-965778) Calling .DriverName
	I0729 13:51:02.339434  239535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 13:51:02.339472  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:02.339563  239535 ssh_runner.go:195] Run: cat /version.json
	I0729 13:51:02.339579  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHHostname
	I0729 13:51:02.344145  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.344359  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.344739  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.344789  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.344646  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:02.345012  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHPort
	I0729 13:51:02.345054  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:02.345088  239535 main.go:141] libmachine: (no-preload-965778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:43:04", ip: ""} in network mk-no-preload-965778: {Iface:virbr1 ExpiryTime:2024-07-29 14:50:49 +0000 UTC Type:0 Mac:52:54:00:1b:43:04 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:no-preload-965778 Clientid:01:52:54:00:1b:43:04}
	I0729 13:51:02.345101  239535 main.go:141] libmachine: (no-preload-965778) DBG | domain no-preload-965778 has defined IP address 192.168.72.204 and MAC address 52:54:00:1b:43:04 in network mk-no-preload-965778
	I0729 13:51:02.345220  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:02.345279  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHKeyPath
	I0729 13:51:02.345474  239535 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa Username:docker}
	I0729 13:51:02.345490  239535 main.go:141] libmachine: (no-preload-965778) Calling .GetSSHUsername
	I0729 13:51:02.345659  239535 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/no-preload-965778/id_rsa Username:docker}
	I0729 13:51:02.452201  239535 ssh_runner.go:195] Run: systemctl --version
	I0729 13:51:02.460308  239535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 13:51:02.467490  239535 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 13:51:02.467564  239535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 13:51:02.490939  239535 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
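	The `find … -exec mv {} {}.mk_disabled` run above disables conflicting CNI configs by renaming rather than deleting them, so they can be restored later. A self-contained sketch of the same rename pattern against a throwaway directory (the file name is taken from the log line above; the directory is illustrative):

```shell
#!/usr/bin/env sh
# Disable bridge/podman CNI configs by appending .mk_disabled,
# mirroring the find invocation above, but in a temp dir.
cnidir=$(mktemp -d)
touch "$cnidir/87-podman-bridge.conflist"

find "$cnidir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cnidir"
```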
	I0729 13:51:02.490970  239535 start.go:495] detecting cgroup driver to use...
	I0729 13:51:02.491104  239535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:51:02.513658  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0729 13:51:02.527350  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 13:51:02.541237  239535 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 13:51:02.541311  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 13:51:02.558737  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 13:51:02.574484  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 13:51:02.590798  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 13:51:02.607617  239535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 13:51:02.623326  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 13:51:02.643331  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 13:51:02.659399  239535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 13:51:02.671087  239535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 13:51:02.684697  239535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 13:51:02.699343  239535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:51:02.849789  239535 ssh_runner.go:195] Run: sudo systemctl restart containerd
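	The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver. The key edit, sketched against a throwaway copy of a minimal config (the file contents are illustrative; the sed expression is the one from the log):

```shell
#!/usr/bin/env sh
# Flip containerd's SystemdCgroup flag in place, as the step above does.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# GNU sed: -i edits in place, -r enables extended regexes;
# \1 preserves the line's original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```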
	I0729 13:51:02.875868  239535 start.go:495] detecting cgroup driver to use...
	I0729 13:51:02.875970  239535 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 13:51:02.897189  239535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:51:02.922557  239535 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 13:51:02.946444  239535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 13:51:02.961791  239535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 13:51:02.977774  239535 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 13:51:03.013197  239535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 13:51:03.030115  239535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 13:51:03.054960  239535 ssh_runner.go:195] Run: which cri-dockerd
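	The `printf … | sudo tee /etc/crictl.yaml` step above points `crictl` at the cri-dockerd socket. The same one-line config written to a throwaway path (the path is illustrative; the endpoint is the one from the log):

```shell
#!/usr/bin/env sh
# Write a crictl config selecting the cri-dockerd runtime endpoint,
# mirroring the printf | tee above without touching /etc.
crictl_cfg=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$crictl_cfg"
cat "$crictl_cfg"
```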
	I0729 13:51:03.060543  239535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 13:51:03.073518  239535 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0729 13:51:03.094246  239535 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 13:51:03.225729  239535 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 13:51:03.384106  239535 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 13:51:03.384260  239535 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 13:51:03.411409  239535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 13:51:03.549884  239535 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 13:52:04.597697  239535 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.047769066s)
	I0729 13:52:04.597782  239535 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0729 13:52:04.746753  239535 out.go:177] 
	W0729 13:52:04.870648  239535 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 29 13:51:00 no-preload-965778 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 13:51:00 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:00.627506302Z" level=info msg="Starting up"
	Jul 29 13:51:00 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:00.629138475Z" level=info msg="containerd not running, starting managed containerd"
	Jul 29 13:51:00 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:00.630146803Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=536
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.658171476Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682441609Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682592824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682682986Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682723536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682851910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.682972952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683241537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683310771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683358196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683393791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683505962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.683719400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.685810341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.685959551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.686230657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.686306833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.686422352Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.686495161Z" level=info msg="metadata content store policy set" policy=shared
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.699799385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.700018632Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.701009763Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.701122471Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.701157636Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.701322472Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.701972360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702124211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702159811Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702178133Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702193728Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702210850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702230278Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702247043Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702263945Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702279337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702293898Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702306886Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702331758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702348565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702363887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702379414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702393504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702408714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702420653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702440028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702456838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702473483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702487859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702501206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702514923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702535731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702558768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702573120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702585579Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702638743Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702659881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702698953Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702724259Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702747846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702783217Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.702810904Z" level=info msg="NRI interface is disabled by configuration."
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.704332729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.704607880Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.704831155Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 29 13:51:00 no-preload-965778 dockerd[536]: time="2024-07-29T13:51:00.704976119Z" level=info msg="containerd successfully booted in 0.047789s"
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.670730056Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.688200493Z" level=info msg="Loading containers: start."
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.800631892Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.934133681Z" level=info msg="Loading containers: done."
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.950396882Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 29 13:51:01 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:01.950555211Z" level=info msg="Daemon has completed initialization"
	Jul 29 13:51:02 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:02.028374157Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 29 13:51:02 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:02.028612453Z" level=info msg="API listen on [::]:2376"
	Jul 29 13:51:02 no-preload-965778 systemd[1]: Started Docker Application Container Engine.
	Jul 29 13:51:03 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:03.523458428Z" level=info msg="Processing signal 'terminated'"
	Jul 29 13:51:03 no-preload-965778 systemd[1]: Stopping Docker Application Container Engine...
	Jul 29 13:51:03 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:03.524910780Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 29 13:51:03 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:03.525491715Z" level=info msg="Daemon shutdown complete"
	Jul 29 13:51:03 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:03.525559617Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 29 13:51:03 no-preload-965778 dockerd[529]: time="2024-07-29T13:51:03.525626121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 29 13:51:04 no-preload-965778 systemd[1]: docker.service: Deactivated successfully.
	Jul 29 13:51:04 no-preload-965778 systemd[1]: Stopped Docker Application Container Engine.
	Jul 29 13:51:04 no-preload-965778 systemd[1]: Starting Docker Application Container Engine...
	Jul 29 13:51:04 no-preload-965778 dockerd[844]: time="2024-07-29T13:51:04.571173609Z" level=info msg="Starting up"
	Jul 29 13:52:04 no-preload-965778 dockerd[844]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 29 13:52:04 no-preload-965778 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 29 13:52:04 no-preload-965778 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 29 13:52:04 no-preload-965778 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0729 13:52:04.870745  239535 out.go:239] * 
	W0729 13:52:04.871916  239535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 13:52:04.910143  239535 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-965778 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0": exit status 90
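The root cause visible in the journal above is the second dockerd start (pid 844) timing out after 60 seconds while dialing `/run/containerd/containerd.sock`, which is why `sudo systemctl restart docker` exits with status 1. When triaging a report like this, the long journal dump can be reduced to just its fatal lines with a small filter. The sketch below is purely illustrative: `docker.log` is a made-up file standing in for a saved copy of the `journalctl -u docker` output, seeded here with three lines taken from the dump above.

```shell
# Hypothetical triage helper: docker.log is an assumed local copy of the
# docker unit journal (here seeded with a few lines from the report).
cat > docker.log <<'EOF'
Jul 29 13:51:04 no-preload-965778 dockerd[844]: time="2024-07-29T13:51:04.571173609Z" level=info msg="Starting up"
Jul 29 13:52:04 no-preload-965778 dockerd[844]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 29 13:52:04 no-preload-965778 systemd[1]: docker.service: Failed with result 'exit-code'.
EOF

# Drop routine startup chatter and keep only the lines describing the failure.
grep -E 'failed to (start|dial)|Failed with result' docker.log
```

On this sample the filter surfaces the `context deadline exceeded` dial failure and the systemd exit-code line, which together point at containerd never coming up for the second daemon start.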
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778: exit status 6 (268.885566ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:52:05.245899  243030 status.go:417] kubeconfig endpoint: get endpoint: "no-preload-965778" does not appear in /home/jenkins/minikube-integration/19338-179709/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-965778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (92.19s)
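The post-mortem above shows `minikube status` reporting `Running` while the helper still skips log retrieval because of the stale-context warning. The sketch below reproduces that decision in simplified form: it is not the actual helpers_test.go code, just a grep over the exact state string quoted at helpers_test.go:241.

```shell
# The raw `minikube status` stdout captured in the post-mortem above
# (the state string quoted by helpers_test.go:241).
status_output='Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`'

# The helper treats the profile as unusable for log retrieval when the
# stale-context warning accompanies a "Running" host state; a grep for
# the warning is enough to reproduce that decision here.
if printf '%s\n' "$status_output" | grep -q 'stale minikube-vm'; then
  state="stale-context"
else
  state="ok"
fi
echo "$state"
```

As the warning itself suggests, `minikube update-context` would rewrite the kubeconfig entry to point at the current VM.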

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-965778 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-965778 create -f testdata/busybox.yaml: exit status 1 (66.2838ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-965778" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-965778 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778: exit status 6 (306.84974ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:52:05.622135  243070 status.go:417] kubeconfig endpoint: get endpoint: "no-preload-965778" does not appear in /home/jenkins/minikube-integration/19338-179709/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-965778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778: exit status 6 (233.43796ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:52:05.856852  243100 status.go:417] kubeconfig endpoint: get endpoint: "no-preload-965778" does not appear in /home/jenkins/minikube-integration/19338-179709/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-965778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.61s)
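Each post-mortem here fails on the same root cause: status.go:417 reports that the profile name does not appear in the kubeconfig. The sketch below simulates that lookup against a hypothetical minimal kubeconfig (the YAML content and the grep-based check are illustrative assumptions, not the actual status.go implementation, which parses the file properly).

```shell
# Hypothetical minimal kubeconfig, standing in for the file at
# ~/.minikube/../kubeconfig referenced in the error above.
kubeconfig=$(cat <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://192.168.39.10:8443
  name: minikube
EOF
)

profile="no-preload-965778"
# status.go resolves the cluster endpoint by profile name; when the name
# is absent, it emits the "does not appear in ... kubeconfig" error seen
# in the stderr captures. A grep approximates that membership check.
if ! printf '%s\n' "$kubeconfig" | grep -q "name: $profile"; then
  echo "kubeconfig endpoint: get endpoint: \"$profile\" does not appear in kubeconfig"
fi
```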

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (59.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-965778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 13:52:08.838500  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-965778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (59.033372246s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-965778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-965778 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-965778 describe deploy/metrics-server -n kube-system: exit status 1 (44.467484ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-965778" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-965778 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778: exit status 6 (239.568696ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 13:53:05.174772  243999 status.go:417] kubeconfig endpoint: get endpoint: "no-preload-965778" does not appear in /home/jenkins/minikube-integration/19338-179709/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-965778" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (59.32s)

                                                
                                    

Test pass (318/350)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.42
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 3.57
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 3.15
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.57
31 TestOffline 71.19
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 226.58
38 TestAddons/serial/Volcano 42.6
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestAddons/parallel/Registry 14.97
43 TestAddons/parallel/Ingress 22.77
44 TestAddons/parallel/InspektorGadget 11.84
45 TestAddons/parallel/MetricsServer 5.7
46 TestAddons/parallel/HelmTiller 12.35
48 TestAddons/parallel/CSI 55.57
49 TestAddons/parallel/Headlamp 19.78
50 TestAddons/parallel/CloudSpanner 5.5
51 TestAddons/parallel/LocalPath 13.14
52 TestAddons/parallel/NvidiaDevicePlugin 5.48
53 TestAddons/parallel/Yakd 11.65
54 TestAddons/StoppedEnableDisable 13.6
55 TestCertOptions 65.26
56 TestCertExpiration 296.05
57 TestDockerFlags 75.05
58 TestForceSystemdFlag 67.49
59 TestForceSystemdEnv 106.97
61 TestKVMDriverInstallOrUpdate 4.26
65 TestErrorSpam/setup 48.6
66 TestErrorSpam/start 0.36
67 TestErrorSpam/status 0.74
68 TestErrorSpam/pause 1.21
69 TestErrorSpam/unpause 1.26
70 TestErrorSpam/stop 15.7
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 184.63
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 42.47
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.49
82 TestFunctional/serial/CacheCmd/cache/add_local 1.33
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 41.52
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 0.96
93 TestFunctional/serial/LogsFileCmd 0.96
94 TestFunctional/serial/InvalidService 4.3
96 TestFunctional/parallel/ConfigCmd 0.32
97 TestFunctional/parallel/DashboardCmd 14.83
98 TestFunctional/parallel/DryRun 0.29
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.83
104 TestFunctional/parallel/ServiceCmdConnect 23.7
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 47.03
108 TestFunctional/parallel/SSHCmd 0.41
109 TestFunctional/parallel/CpCmd 1.35
110 TestFunctional/parallel/MySQL 28.03
111 TestFunctional/parallel/FileSync 0.21
112 TestFunctional/parallel/CertSync 1.38
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
120 TestFunctional/parallel/License 0.25
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.47
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
128 TestFunctional/parallel/ImageCommands/Setup 1.54
129 TestFunctional/parallel/DockerEnv/bash 0.85
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.23
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
151 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
153 TestFunctional/parallel/ProfileCmd/profile_list 0.26
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
155 TestFunctional/parallel/MountCmd/any-port 13.59
156 TestFunctional/parallel/ServiceCmd/List 1.63
157 TestFunctional/parallel/ServiceCmd/JSONOutput 1.65
158 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
159 TestFunctional/parallel/ServiceCmd/Format 0.5
160 TestFunctional/parallel/ServiceCmd/URL 0.49
161 TestFunctional/parallel/MountCmd/specific-port 1.45
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.25
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.01
166 TestGvisorAddon 202.83
169 TestMultiControlPlane/serial/StartCluster 224.71
170 TestMultiControlPlane/serial/DeployApp 6.11
171 TestMultiControlPlane/serial/PingHostFromPods 1.25
172 TestMultiControlPlane/serial/AddWorkerNode 65.53
173 TestMultiControlPlane/serial/NodeLabels 0.07
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
175 TestMultiControlPlane/serial/CopyFile 12.55
176 TestMultiControlPlane/serial/StopSecondaryNode 13.22
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
178 TestMultiControlPlane/serial/RestartSecondaryNode 38.16
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 303.45
181 TestMultiControlPlane/serial/DeleteSecondaryNode 7.75
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
183 TestMultiControlPlane/serial/StopCluster 38.11
184 TestMultiControlPlane/serial/RestartCluster 122.79
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
186 TestMultiControlPlane/serial/AddSecondaryNode 82.42
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
190 TestImageBuild/serial/Setup 53.13
191 TestImageBuild/serial/NormalBuild 2
192 TestImageBuild/serial/BuildWithBuildArg 1.02
193 TestImageBuild/serial/BuildWithDockerIgnore 0.74
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
198 TestJSONOutput/start/Command 65.81
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.59
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.54
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 12.66
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.2
226 TestMainNoArgs 0.04
227 TestMinikubeProfile 103.12
230 TestMountStart/serial/StartWithMountFirst 30.79
231 TestMountStart/serial/VerifyMountFirst 0.37
232 TestMountStart/serial/StartWithMountSecond 27.79
233 TestMountStart/serial/VerifyMountSecond 0.38
234 TestMountStart/serial/DeleteFirst 0.7
235 TestMountStart/serial/VerifyMountPostDelete 0.39
236 TestMountStart/serial/Stop 2.28
237 TestMountStart/serial/RestartStopped 26.08
238 TestMountStart/serial/VerifyMountPostStop 0.37
241 TestMultiNode/serial/FreshStart2Nodes 137.4
242 TestMultiNode/serial/DeployApp2Nodes 4.01
243 TestMultiNode/serial/PingHostFrom2Pods 0.82
244 TestMultiNode/serial/AddNode 54.39
245 TestMultiNode/serial/MultiNodeLabels 0.06
246 TestMultiNode/serial/ProfileList 0.22
247 TestMultiNode/serial/CopyFile 7.28
248 TestMultiNode/serial/StopNode 3.39
249 TestMultiNode/serial/StartAfterStop 42.2
250 TestMultiNode/serial/RestartKeepsNodes 172.33
251 TestMultiNode/serial/DeleteNode 2.16
252 TestMultiNode/serial/StopMultiNode 25.81
253 TestMultiNode/serial/RestartMultiNode 115.94
254 TestMultiNode/serial/ValidateNameConflict 51.91
259 TestPreload 153.55
261 TestScheduledStopUnix 122.69
262 TestSkaffold 125.77
265 TestRunningBinaryUpgrade 200.29
267 TestKubernetesUpgrade 215.01
280 TestPause/serial/Start 107.07
289 TestStoppedBinaryUpgrade/Setup 0.42
290 TestStoppedBinaryUpgrade/Upgrade 140.84
291 TestPause/serial/SecondStartNoReconfiguration 56.23
292 TestPause/serial/Pause 0.59
293 TestPause/serial/VerifyStatus 0.25
294 TestPause/serial/Unpause 0.55
295 TestPause/serial/PauseAgain 0.71
296 TestPause/serial/DeletePaused 1.02
297 TestPause/serial/VerifyDeletedResources 17.66
299 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
300 TestNoKubernetes/serial/StartWithK8s 58.37
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
302 TestNoKubernetes/serial/StartWithStopK8s 69.57
303 TestNoKubernetes/serial/Start 40.54
304 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
305 TestNoKubernetes/serial/ProfileList 0.92
306 TestNoKubernetes/serial/Stop 2.47
307 TestNoKubernetes/serial/StartNoArgs 62.17
308 TestNetworkPlugins/group/auto/Start 92.63
309 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
310 TestNetworkPlugins/group/kindnet/Start 138.03
311 TestNetworkPlugins/group/calico/Start 123.88
312 TestNetworkPlugins/group/auto/KubeletFlags 0.2
313 TestNetworkPlugins/group/auto/NetCatPod 11.24
314 TestNetworkPlugins/group/auto/DNS 0.16
315 TestNetworkPlugins/group/auto/Localhost 0.14
316 TestNetworkPlugins/group/auto/HairPin 0.15
317 TestNetworkPlugins/group/custom-flannel/Start 82.61
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
320 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
321 TestNetworkPlugins/group/false/Start 83.9
322 TestNetworkPlugins/group/kindnet/DNS 0.24
323 TestNetworkPlugins/group/kindnet/Localhost 0.19
324 TestNetworkPlugins/group/kindnet/HairPin 0.23
325 TestNetworkPlugins/group/enable-default-cni/Start 86.94
326 TestNetworkPlugins/group/calico/ControllerPod 6.01
327 TestNetworkPlugins/group/calico/KubeletFlags 0.24
328 TestNetworkPlugins/group/calico/NetCatPod 15.27
329 TestNetworkPlugins/group/calico/DNS 0.2
330 TestNetworkPlugins/group/calico/Localhost 0.15
331 TestNetworkPlugins/group/calico/HairPin 0.14
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
334 TestNetworkPlugins/group/custom-flannel/DNS 0.19
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
337 TestNetworkPlugins/group/flannel/Start 82.88
338 TestNetworkPlugins/group/false/KubeletFlags 0.24
339 TestNetworkPlugins/group/false/NetCatPod 11.31
340 TestNetworkPlugins/group/bridge/Start 91.55
341 TestNetworkPlugins/group/false/DNS 0.15
342 TestNetworkPlugins/group/false/Localhost 0.14
343 TestNetworkPlugins/group/false/HairPin 0.14
344 TestNetworkPlugins/group/kubenet/Start 86.92
345 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
346 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
347 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
348 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
349 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
351 TestStartStop/group/old-k8s-version/serial/FirstStart 179.18
352 TestNetworkPlugins/group/flannel/ControllerPod 6.02
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
354 TestNetworkPlugins/group/flannel/NetCatPod 14.32
355 TestNetworkPlugins/group/flannel/DNS 0.18
356 TestNetworkPlugins/group/flannel/Localhost 0.2
357 TestNetworkPlugins/group/flannel/HairPin 0.2
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
359 TestNetworkPlugins/group/bridge/NetCatPod 11.26
360 TestNetworkPlugins/group/bridge/DNS 0.17
361 TestNetworkPlugins/group/bridge/Localhost 0.13
362 TestNetworkPlugins/group/bridge/HairPin 0.14
365 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
366 TestNetworkPlugins/group/kubenet/NetCatPod 10.29
368 TestStartStop/group/embed-certs/serial/FirstStart 82.46
369 TestNetworkPlugins/group/kubenet/DNS 0.16
370 TestNetworkPlugins/group/kubenet/Localhost 0.14
371 TestNetworkPlugins/group/kubenet/HairPin 0.13
373 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.25
376 TestStartStop/group/embed-certs/serial/DeployApp 8.32
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
378 TestStartStop/group/embed-certs/serial/Stop 13.32
379 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
380 TestStartStop/group/embed-certs/serial/SecondStart 428.97
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
382 TestStartStop/group/old-k8s-version/serial/DeployApp 8.53
383 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
384 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.62
385 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
386 TestStartStop/group/old-k8s-version/serial/Stop 13.32
387 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
388 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 388.25
389 TestStartStop/group/no-preload/serial/Stop 61.4
390 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
391 TestStartStop/group/old-k8s-version/serial/SecondStart 425.14
392 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
393 TestStartStop/group/no-preload/serial/SecondStart 70.95
394 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
395 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
396 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
397 TestStartStop/group/no-preload/serial/Pause 2.4
399 TestStartStop/group/newest-cni/serial/FirstStart 60.6
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
402 TestStartStop/group/newest-cni/serial/Stop 12.67
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
404 TestStartStop/group/newest-cni/serial/SecondStart 39.6
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
408 TestStartStop/group/newest-cni/serial/Pause 2.55
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
410 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.39
414 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
415 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
416 TestStartStop/group/embed-certs/serial/Pause 2.36
417 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
418 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
419 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
420 TestStartStop/group/old-k8s-version/serial/Pause 2.28
TestDownloadOnly/v1.20.0/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-792595 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-792595 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (6.419001214s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-792595
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-792595: exit status 85 (63.141158ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-792595 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |          |
	|         | -p download-only-792595        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:47:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:47:10.381984  186963 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:47:10.382278  186963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:10.382288  186963 out.go:304] Setting ErrFile to fd 2...
	I0729 12:47:10.382294  186963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:10.382492  186963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	W0729 12:47:10.382624  186963 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19338-179709/.minikube/config/config.json: open /home/jenkins/minikube-integration/19338-179709/.minikube/config/config.json: no such file or directory
	I0729 12:47:10.383236  186963 out.go:298] Setting JSON to true
	I0729 12:47:10.384221  186963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8981,"bootTime":1722248249,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:47:10.384291  186963 start.go:139] virtualization: kvm guest
	I0729 12:47:10.386941  186963 out.go:97] [download-only-792595] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 12:47:10.387058  186963 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19338-179709/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 12:47:10.387119  186963 notify.go:220] Checking for updates...
	I0729 12:47:10.388581  186963 out.go:169] MINIKUBE_LOCATION=19338
	I0729 12:47:10.390112  186963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:47:10.391630  186963 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 12:47:10.393074  186963 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 12:47:10.394505  186963 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 12:47:10.397236  186963 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 12:47:10.397508  186963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:47:10.433826  186963 out.go:97] Using the kvm2 driver based on user configuration
	I0729 12:47:10.433855  186963 start.go:297] selected driver: kvm2
	I0729 12:47:10.433861  186963 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:47:10.434247  186963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:47:10.434346  186963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19338-179709/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:47:10.452202  186963 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:47:10.452276  186963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:47:10.452832  186963 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 12:47:10.452995  186963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:47:10.453058  186963 cni.go:84] Creating CNI manager for ""
	I0729 12:47:10.453075  186963 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 12:47:10.453141  186963 start.go:340] cluster config:
	{Name:download-only-792595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-792595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:47:10.453320  186963 iso.go:125] acquiring lock: {Name:mkba981b31daf918fe5bcf2915c3bde7a7b27504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:47:10.455311  186963 out.go:97] Downloading VM boot image ...
	I0729 12:47:10.455353  186963 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:47:13.069852  186963 out.go:97] Starting "download-only-792595" primary control-plane node in "download-only-792595" cluster
	I0729 12:47:13.069894  186963 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 12:47:13.100637  186963 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 12:47:13.100682  186963 cache.go:56] Caching tarball of preloaded images
	I0729 12:47:13.100838  186963 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 12:47:13.102940  186963 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 12:47:13.102971  186963 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 12:47:13.130879  186963 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19338-179709/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-792595 host does not exist
	  To start a cluster, run: "minikube start -p download-only-792595"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-792595
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (3.57s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-351880 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-351880 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 : (3.564974877s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (3.57s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-351880
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-351880: exit status 85 (60.132834ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-792595 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |                     |
	|         | -p download-only-792595        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| delete  | -p download-only-792595        | download-only-792595 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| start   | -o=json --download-only        | download-only-351880 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |                     |
	|         | -p download-only-351880        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:47:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:47:17.130715  187151 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:47:17.130815  187151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:17.130820  187151 out.go:304] Setting ErrFile to fd 2...
	I0729 12:47:17.130824  187151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:17.130995  187151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 12:47:17.131558  187151 out.go:298] Setting JSON to true
	I0729 12:47:17.132416  187151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8988,"bootTime":1722248249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:47:17.132489  187151 start.go:139] virtualization: kvm guest
	I0729 12:47:17.134604  187151 out.go:97] [download-only-351880] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:47:17.134789  187151 notify.go:220] Checking for updates...
	I0729 12:47:17.136037  187151 out.go:169] MINIKUBE_LOCATION=19338
	I0729 12:47:17.137599  187151 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:47:17.138878  187151 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 12:47:17.140512  187151 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 12:47:17.141695  187151 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-351880 host does not exist
	  To start a cluster, run: "minikube start -p download-only-351880"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-351880
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (3.15s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-144186 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-144186 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=kvm2 : (3.145090166s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (3.15s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-144186
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-144186: exit status 85 (59.310498ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-792595 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |                     |
	|         | -p download-only-792595             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| delete  | -p download-only-792595             | download-only-792595 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| start   | -o=json --download-only             | download-only-351880 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |                     |
	|         | -p download-only-351880             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| delete  | -p download-only-351880             | download-only-351880 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC | 29 Jul 24 12:47 UTC |
	| start   | -o=json --download-only             | download-only-144186 | jenkins | v1.33.1 | 29 Jul 24 12:47 UTC |                     |
	|         | -p download-only-144186             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:47:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:47:21.018818  187338 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:47:21.019059  187338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:21.019067  187338 out.go:304] Setting ErrFile to fd 2...
	I0729 12:47:21.019071  187338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:47:21.019250  187338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 12:47:21.019812  187338 out.go:298] Setting JSON to true
	I0729 12:47:21.020723  187338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8992,"bootTime":1722248249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:47:21.020796  187338 start.go:139] virtualization: kvm guest
	I0729 12:47:21.022940  187338 out.go:97] [download-only-144186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:47:21.023128  187338 notify.go:220] Checking for updates...
	I0729 12:47:21.024638  187338 out.go:169] MINIKUBE_LOCATION=19338
	I0729 12:47:21.026167  187338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:47:21.027533  187338 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 12:47:21.028920  187338 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 12:47:21.030331  187338 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-144186 host does not exist
	  To start a cluster, run: "minikube start -p download-only-144186"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-144186
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-058680 --alsologtostderr --binary-mirror http://127.0.0.1:35093 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-058680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-058680
--- PASS: TestBinaryMirror (0.57s)

TestOffline (71.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-866861 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-866861 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m10.205125192s)
helpers_test.go:175: Cleaning up "offline-docker-866861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-866861
--- PASS: TestOffline (71.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-543070
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-543070: exit status 85 (51.161975ms)

-- stdout --
	* Profile "addons-543070" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-543070"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-543070
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-543070: exit status 85 (48.873646ms)

-- stdout --
	* Profile "addons-543070" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-543070"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (226.58s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-543070 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-543070 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m46.583226668s)
--- PASS: TestAddons/Setup (226.58s)

TestAddons/serial/Volcano (42.6s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 17.675807ms
addons_test.go:897: volcano-scheduler stabilized in 17.764006ms
addons_test.go:905: volcano-admission stabilized in 17.818535ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-m98hj" [2edadf9b-8b71-45f2-9b15-c518b1646eb0] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004847711s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-chkd2" [33354cbd-dac5-4ae2-bc1f-6db62317857f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004341872s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-9scww" [ddf32222-7498-422e-a1ec-fdea7514a39f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004071592s
addons_test.go:932: (dbg) Run:  kubectl --context addons-543070 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-543070 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-543070 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [77fa37c9-1357-4e7a-b2e0-15e6d628a1b5] Pending
helpers_test.go:344: "test-job-nginx-0" [77fa37c9-1357-4e7a-b2e0-15e6d628a1b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [77fa37c9-1357-4e7a-b2e0-15e6d628a1b5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.00385492s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable volcano --alsologtostderr -v=1: (10.19292483s)
--- PASS: TestAddons/serial/Volcano (42.60s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-543070 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-543070 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (14.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.326506ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-kj9ps" [dd2fa3ae-2b37-4c97-8ad1-db4cb69a47ec] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00456354s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-frxqw" [7f601b48-5264-4264-9673-7122e299169b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005065019s
addons_test.go:342: (dbg) Run:  kubectl --context addons-543070 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-543070 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-543070 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.200968625s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 ip
2024/07/29 12:52:26 [DEBUG] GET http://192.168.39.238:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.97s)

TestAddons/parallel/Ingress (22.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-543070 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-543070 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-543070 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [723e08ab-cc7c-481d-a6cb-bdc8d1812564] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [723e08ab-cc7c-481d-a6cb-bdc8d1812564] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003711048s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-543070 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.238
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable ingress-dns --alsologtostderr -v=1: (1.448311955s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable ingress --alsologtostderr -v=1: (7.799450095s)
--- PASS: TestAddons/parallel/Ingress (22.77s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j8lmj" [395efafe-6cd1-40ae-95c7-e11582a9a377] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004385018s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-543070
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-543070: (5.831049175s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/MetricsServer (5.70s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.502295ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wm5zq" [c5b09037-3301-4ba6-8754-a21e726f68aa] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006027273s
addons_test.go:417: (dbg) Run:  kubectl --context addons-543070 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/HelmTiller (12.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.236462ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-d84rm" [b048b454-fbf6-4dd0-8401-73105686a87f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005825058s
addons_test.go:475: (dbg) Run:  kubectl --context addons-543070 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-543070 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.803313322s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.35s)

TestAddons/parallel/CSI (55.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.334979ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-543070 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-543070 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ca5c6566-70ec-4121-8bf0-980ed8275ac0] Pending
helpers_test.go:344: "task-pv-pod" [ca5c6566-70ec-4121-8bf0-980ed8275ac0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ca5c6566-70ec-4121-8bf0-980ed8275ac0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.012702181s
addons_test.go:590: (dbg) Run:  kubectl --context addons-543070 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-543070 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-543070 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-543070 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-543070 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-543070 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-543070 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f93e6c52-7cb9-4696-bae3-dd483abd7d67] Pending
helpers_test.go:344: "task-pv-pod-restore" [f93e6c52-7cb9-4696-bae3-dd483abd7d67] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f93e6c52-7cb9-4696-bae3-dd483abd7d67] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004118307s
addons_test.go:632: (dbg) Run:  kubectl --context addons-543070 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-543070 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-543070 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.700535112s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.57s)

TestAddons/parallel/Headlamp (19.78s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-543070 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-b9jgz" [1d68f3ab-72d4-46e2-b789-d0af9c8b265e] Pending
helpers_test.go:344: "headlamp-7867546754-b9jgz" [1d68f3ab-72d4-46e2-b789-d0af9c8b265e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-b9jgz" [1d68f3ab-72d4-46e2-b789-d0af9c8b265e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004106187s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable headlamp --alsologtostderr -v=1: (5.833185479s)
--- PASS: TestAddons/parallel/Headlamp (19.78s)

TestAddons/parallel/CloudSpanner (5.50s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-b9tz8" [cd44761d-85c6-4c37-a3ad-317be9ec3ff5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005308393s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-543070
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (13.14s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-543070 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-543070 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8bcaa442-47b7-40d3-94d5-6ba0d9ac575c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8bcaa442-47b7-40d3-94d5-6ba0d9ac575c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8bcaa442-47b7-40d3-94d5-6ba0d9ac575c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003746742s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-543070 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 ssh "cat /opt/local-path-provisioner/pvc-cb35a016-7d2e-4393-8e9e-ca6e513c7cfc_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-543070 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-543070 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.14s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l5hwb" [36024b9c-ba13-48f4-b345-cdd9e42f7556] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00601116s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-543070
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/parallel/Yakd (11.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-mnrnm" [4129dc3a-c67b-49ac-a80c-3caebd3a9a0a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004647214s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-543070 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-543070 addons disable yakd --alsologtostderr -v=1: (5.640933725s)
--- PASS: TestAddons/parallel/Yakd (11.65s)

TestAddons/StoppedEnableDisable (13.60s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-543070
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-543070: (13.329465276s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-543070
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-543070
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-543070
--- PASS: TestAddons/StoppedEnableDisable (13.60s)

TestCertOptions (65.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-267203 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-267203 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m3.707373651s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-267203 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-267203 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-267203 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-267203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-267203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-267203: (1.048949749s)
--- PASS: TestCertOptions (65.26s)

TestCertExpiration (296.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-317417 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-317417 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m24.474057651s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-317417 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-317417 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (30.478888678s)
helpers_test.go:175: Cleaning up "cert-expiration-317417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-317417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-317417: (1.096326581s)
--- PASS: TestCertExpiration (296.05s)

TestDockerFlags (75.05s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-658154 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-658154 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m13.399952815s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-658154 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-658154 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-658154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-658154
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-658154: (1.161349776s)
--- PASS: TestDockerFlags (75.05s)

TestForceSystemdFlag (67.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-386213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-386213 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m6.177981466s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-386213 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-386213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-386213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-386213: (1.052083221s)
--- PASS: TestForceSystemdFlag (67.49s)

TestForceSystemdEnv (106.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-402483 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-402483 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m45.467529996s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-402483 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-402483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-402483
E0729 13:43:27.163944  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-402483: (1.160404799s)
--- PASS: TestForceSystemdEnv (106.97s)

TestKVMDriverInstallOrUpdate (4.26s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.26s)

TestErrorSpam/setup (48.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-773538 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-773538 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-773538 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-773538 --driver=kvm2 : (48.603169709s)
--- PASS: TestErrorSpam/setup (48.60s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 unpause
--- PASS: TestErrorSpam/unpause (1.26s)

TestErrorSpam/stop (15.7s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop: (12.453628799s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop: (1.84326724s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-773538 --log_dir /tmp/nospam-773538 stop: (1.407471407s)
--- PASS: TestErrorSpam/stop (15.70s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19338-179709/.minikube/files/etc/test/nested/copy/186951/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (184.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0729 12:56:12.026898  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.032686  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.043019  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.063415  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.103784  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.184184  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.344672  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:12.665397  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:13.306471  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:14.587050  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:17.148014  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:22.268809  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:32.509012  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:56:52.989317  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 12:57:33.950413  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-673428 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (3m4.632264058s)
--- PASS: TestFunctional/serial/StartWithProxy (184.63s)
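The cert_rotation retry timestamps above are spaced at roughly doubling intervals (about 5 ms between the first two retries, about 41 s before the last one), consistent with an exponential backoff in client-go. A minimal sketch of that doubling schedule (the parameter values are inferred from the timestamp gaps, not taken from client-go):

```python
def retry_delays(first_delay_s=0.005, factor=2.0, retries=14):
    """Generate doubling retry delays like the cert_rotation gaps above:
    ~5 ms between the first two retries, growing to ~41 s before the last."""
    delays, d = [], first_delay_s
    for _ in range(retries):
        delays.append(d)
        d *= factor
    return delays
```

Summing the schedule explains why the retries span the whole 12:56:12 to 12:57:33 window of the log.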

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.47s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-673428 --alsologtostderr -v=8: (42.47155788s)
functional_test.go:659: soft start took 42.472185281s for "functional-673428" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.47s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-673428 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-673428 /tmp/TestFunctionalserialCacheCmdcacheadd_local3161025655/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache add minikube-local-cache-test:functional-673428
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-673428 cache add minikube-local-cache-test:functional-673428: (1.005093898s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache delete minikube-local-cache-test:functional-673428
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-673428
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.673315ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
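The cache_reload sequence above exercises one invariant: an image deleted inside the node ("ssh sudo docker rmi") is missing until `minikube cache reload` restores it from the host-side cache, after which `crictl inspecti` succeeds again. A toy model of that flow (the sets stand in for the real image stores; illustrative only):

```python
# Host-side cache and node image store as plain sets.
host_cache = {"registry.k8s.io/pause:latest"}
node_images = set(host_cache)

node_images.discard("registry.k8s.io/pause:latest")           # docker rmi in the node
missing = "registry.k8s.io/pause:latest" not in node_images   # inspecti -> exit 1

node_images |= host_cache                                     # minikube cache reload
restored = "registry.k8s.io/pause:latest" in node_images      # inspecti -> exit 0
```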

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 kubectl -- --context functional-673428 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-673428 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 12:58:55.874400  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-673428 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.52369725s)
functional_test.go:757: restart took 41.523863045s for "functional-673428" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.52s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-673428 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.96s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 logs
--- PASS: TestFunctional/serial/LogsCmd (0.96s)

TestFunctional/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 logs --file /tmp/TestFunctionalserialLogsFileCmd1085488592/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.96s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-673428 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-673428
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-673428: exit status 115 (280.019054ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.244:32218 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-673428 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 config get cpus: exit status 14 (47.357157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 config get cpus: exit status 14 (51.613123ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
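The ConfigCmd run above shows `config get` exiting with status 14 ("specified key could not be found in config") when the key is absent, and 0 once `config set` has stored it. A dict-backed sketch of those semantics (the functions are illustrative stand-ins, not minikube's implementation):

```python
EX_NOT_FOUND = 14  # exit status minikube reports for a missing config key

config = {}

def config_set(key, value):
    config[key] = value
    return 0

def config_unset(key):
    config.pop(key, None)
    return 0

def config_get(key):
    # Returns (value, exit_code); a missing key yields (None, 14).
    if key not in config:
        return None, EX_NOT_FOUND
    return config[key], 0

config_unset("cpus")
_, code_missing = config_get("cpus")    # mirrors the exit status 14 runs
config_set("cpus", 2)
value, code_found = config_get("cpus")  # mirrors the successful get
```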

TestFunctional/parallel/DashboardCmd (14.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-673428 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-673428 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 195384: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.83s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-673428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (146.293702ms)

-- stdout --
	* [functional-673428] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 12:59:39.765877  195268 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:59:39.766211  195268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:59:39.766227  195268 out.go:304] Setting ErrFile to fd 2...
	I0729 12:59:39.766234  195268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:59:39.766517  195268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 12:59:39.767224  195268 out.go:298] Setting JSON to false
	I0729 12:59:39.768482  195268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9731,"bootTime":1722248249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:59:39.768573  195268 start.go:139] virtualization: kvm guest
	I0729 12:59:39.770723  195268 out.go:177] * [functional-673428] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:59:39.771946  195268 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 12:59:39.771969  195268 notify.go:220] Checking for updates...
	I0729 12:59:39.774188  195268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:59:39.775627  195268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 12:59:39.776831  195268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 12:59:39.777922  195268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:59:39.779080  195268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:59:39.780504  195268 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 12:59:39.781011  195268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 12:59:39.781100  195268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:59:39.796138  195268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0729 12:59:39.796577  195268 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:59:39.797144  195268 main.go:141] libmachine: Using API Version  1
	I0729 12:59:39.797169  195268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:59:39.797573  195268 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:59:39.797803  195268 main.go:141] libmachine: (functional-673428) Calling .DriverName
	I0729 12:59:39.798099  195268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:59:39.798499  195268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 12:59:39.798556  195268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:59:39.813204  195268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0729 12:59:39.813638  195268 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:59:39.814174  195268 main.go:141] libmachine: Using API Version  1
	I0729 12:59:39.814200  195268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:59:39.814513  195268 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:59:39.814761  195268 main.go:141] libmachine: (functional-673428) Calling .DriverName
	I0729 12:59:39.852693  195268 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:59:39.853774  195268 start.go:297] selected driver: kvm2
	I0729 12:59:39.853793  195268 start.go:901] validating driver "kvm2" against &{Name:functional-673428 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-673428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:59:39.853894  195268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:59:39.855842  195268 out.go:177] 
	W0729 12:59:39.857046  195268 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 12:59:39.858190  195268 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-673428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-673428 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (143.858973ms)

-- stdout --
	* [functional-673428] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 12:59:39.616732  195240 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:59:39.616867  195240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:59:39.616878  195240 out.go:304] Setting ErrFile to fd 2...
	I0729 12:59:39.616885  195240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:59:39.617188  195240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 12:59:39.617738  195240 out.go:298] Setting JSON to false
	I0729 12:59:39.618669  195240 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9731,"bootTime":1722248249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:59:39.618742  195240 start.go:139] virtualization: kvm guest
	I0729 12:59:39.621174  195240 out.go:177] * [functional-673428] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 12:59:39.622608  195240 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 12:59:39.622662  195240 notify.go:220] Checking for updates...
	I0729 12:59:39.625516  195240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:59:39.626920  195240 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	I0729 12:59:39.628186  195240 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	I0729 12:59:39.629592  195240 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:59:39.630849  195240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:59:39.632495  195240 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 12:59:39.632885  195240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 12:59:39.632929  195240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:59:39.647601  195240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0729 12:59:39.648170  195240 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:59:39.648715  195240 main.go:141] libmachine: Using API Version  1
	I0729 12:59:39.648739  195240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:59:39.649231  195240 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:59:39.649471  195240 main.go:141] libmachine: (functional-673428) Calling .DriverName
	I0729 12:59:39.649740  195240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:59:39.650082  195240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 12:59:39.650127  195240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:59:39.669441  195240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0729 12:59:39.669951  195240 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:59:39.670507  195240 main.go:141] libmachine: Using API Version  1
	I0729 12:59:39.670538  195240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:59:39.670879  195240 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:59:39.671070  195240 main.go:141] libmachine: (functional-673428) Calling .DriverName
	I0729 12:59:39.705046  195240 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 12:59:39.706344  195240 start.go:297] selected driver: kvm2
	I0729 12:59:39.706359  195240 start.go:901] validating driver "kvm2" against &{Name:functional-673428 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-673428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:59:39.706487  195240 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:59:39.709579  195240 out.go:177] 
	W0729 12:59:39.710984  195240 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 12:59:39.712337  195240 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (23.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-673428 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-673428 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-78hvb" [d42afdb5-ceba-4115-8cc3-02af4caa87df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-78hvb" [d42afdb5-ceba-4115-8cc3-02af4caa87df] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.028342674s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.244:30676
functional_test.go:1671: http://192.168.39.244:30676: success! body:

Hostname: hello-node-connect-57b4589c47-78hvb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.244:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.244:30676
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.70s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (47.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1370a5f3-8830-4145-8758-a25e9d355f1a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005595025s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-673428 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-673428 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-673428 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-673428 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-673428 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [20cb4c95-cd5c-40a6-b39f-14ca3714c997] Pending
helpers_test.go:344: "sp-pod" [20cb4c95-cd5c-40a6-b39f-14ca3714c997] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [20cb4c95-cd5c-40a6-b39f-14ca3714c997] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.005391784s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-673428 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-673428 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-673428 delete -f testdata/storage-provisioner/pod.yaml: (1.565163827s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-673428 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fb9ed4d-8965-49d4-b9af-9ae70a673aee] Pending
helpers_test.go:344: "sp-pod" [2fb9ed4d-8965-49d4-b9af-9ae70a673aee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fb9ed4d-8965-49d4-b9af-9ae70a673aee] Running
2024/07/29 12:59:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003814747s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-673428 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.03s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh -n functional-673428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cp functional-673428:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1635538854/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh -n functional-673428 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh -n functional-673428 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

TestFunctional/parallel/MySQL (28.03s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-673428 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-hc2lj" [d74a5b4a-bb5e-49b7-8db3-e9a5fb7a3db6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-hc2lj" [d74a5b4a-bb5e-49b7-8db3-e9a5fb7a3db6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003810553s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;": exit status 1 (170.989235ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;": exit status 1 (215.166502ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;": exit status 1 (505.7214ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-673428 exec mysql-64454c8b5c-hc2lj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.03s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/186951/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /etc/test/nested/copy/186951/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/186951.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /etc/ssl/certs/186951.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/186951.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /usr/share/ca-certificates/186951.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1869512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /etc/ssl/certs/1869512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1869512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /usr/share/ca-certificates/1869512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-673428 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh "sudo systemctl is-active crio": exit status 1 (223.733056ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-673428 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-673428
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-673428
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-673428 image ls --format short --alsologtostderr:
I0729 12:59:52.715023  196131 out.go:291] Setting OutFile to fd 1 ...
I0729 12:59:52.715173  196131 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:52.715185  196131 out.go:304] Setting ErrFile to fd 2...
I0729 12:59:52.715191  196131 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:52.715456  196131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
I0729 12:59:52.716288  196131 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:52.716437  196131 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:52.717041  196131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:52.717102  196131 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:52.734059  196131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
I0729 12:59:52.734562  196131 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:52.735143  196131 main.go:141] libmachine: Using API Version  1
I0729 12:59:52.735171  196131 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:52.735487  196131 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:52.735667  196131 main.go:141] libmachine: (functional-673428) Calling .GetState
I0729 12:59:52.737599  196131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:52.737643  196131 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:52.752745  196131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
I0729 12:59:52.753286  196131 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:52.753876  196131 main.go:141] libmachine: Using API Version  1
I0729 12:59:52.753911  196131 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:52.754297  196131 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:52.754524  196131 main.go:141] libmachine: (functional-673428) Calling .DriverName
I0729 12:59:52.754763  196131 ssh_runner.go:195] Run: systemctl --version
I0729 12:59:52.754794  196131 main.go:141] libmachine: (functional-673428) Calling .GetSSHHostname
I0729 12:59:52.757706  196131 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:52.758083  196131 main.go:141] libmachine: (functional-673428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7a:2e", ip: ""} in network mk-functional-673428: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:44 +0000 UTC Type:0 Mac:52:54:00:14:7a:2e Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-673428 Clientid:01:52:54:00:14:7a:2e}
I0729 12:59:52.758118  196131 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined IP address 192.168.39.244 and MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:52.758233  196131 main.go:141] libmachine: (functional-673428) Calling .GetSSHPort
I0729 12:59:52.758419  196131 main.go:141] libmachine: (functional-673428) Calling .GetSSHKeyPath
I0729 12:59:52.758614  196131 main.go:141] libmachine: (functional-673428) Calling .GetSSHUsername
I0729 12:59:52.758779  196131 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/functional-673428/id_rsa Username:docker}
I0729 12:59:52.853146  196131 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 12:59:52.900709  196131 main.go:141] libmachine: Making call to close driver server
I0729 12:59:52.900726  196131 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:52.901088  196131 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:52.901119  196131 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:59:52.901128  196131 main.go:141] libmachine: Making call to close driver server
I0729 12:59:52.901136  196131 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:52.901399  196131 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:52.901414  196131 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-673428 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-673428 | 32a83e8eba577 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-673428 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-673428 image ls --format table --alsologtostderr:
I0729 12:59:55.067416  196275 out.go:291] Setting OutFile to fd 1 ...
I0729 12:59:55.067716  196275 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:55.067730  196275 out.go:304] Setting ErrFile to fd 2...
I0729 12:59:55.067737  196275 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:55.067988  196275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
I0729 12:59:55.068603  196275 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:55.068768  196275 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:55.069290  196275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:55.069349  196275 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:55.084665  196275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
I0729 12:59:55.085187  196275 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:55.085835  196275 main.go:141] libmachine: Using API Version  1
I0729 12:59:55.085862  196275 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:55.086276  196275 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:55.086513  196275 main.go:141] libmachine: (functional-673428) Calling .GetState
I0729 12:59:55.088343  196275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:55.088389  196275 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:55.103091  196275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
I0729 12:59:55.103546  196275 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:55.104009  196275 main.go:141] libmachine: Using API Version  1
I0729 12:59:55.104035  196275 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:55.104417  196275 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:55.104600  196275 main.go:141] libmachine: (functional-673428) Calling .DriverName
I0729 12:59:55.104831  196275 ssh_runner.go:195] Run: systemctl --version
I0729 12:59:55.104860  196275 main.go:141] libmachine: (functional-673428) Calling .GetSSHHostname
I0729 12:59:55.107499  196275 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:55.107941  196275 main.go:141] libmachine: (functional-673428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7a:2e", ip: ""} in network mk-functional-673428: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:44 +0000 UTC Type:0 Mac:52:54:00:14:7a:2e Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-673428 Clientid:01:52:54:00:14:7a:2e}
I0729 12:59:55.107972  196275 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined IP address 192.168.39.244 and MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:55.108110  196275 main.go:141] libmachine: (functional-673428) Calling .GetSSHPort
I0729 12:59:55.108295  196275 main.go:141] libmachine: (functional-673428) Calling .GetSSHKeyPath
I0729 12:59:55.108442  196275 main.go:141] libmachine: (functional-673428) Calling .GetSSHUsername
I0729 12:59:55.108604  196275 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/functional-673428/id_rsa Username:docker}
I0729 12:59:55.192509  196275 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 12:59:55.227433  196275 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.227461  196275 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.227756  196275 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.227777  196275 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:59:55.227786  196275 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.227795  196275 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.228009  196275 main.go:141] libmachine: (functional-673428) DBG | Closing plugin on server side
I0729 12:59:55.228014  196275 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.228028  196275 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-673428 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"32a83e8eba577e7cd6a7355527b79987c00be7eab9b72828c2421956eca8509e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-673428"],"size":"30"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-673428"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-673428 image ls --format json --alsologtostderr:
I0729 12:59:54.874030  196252 out.go:291] Setting OutFile to fd 1 ...
I0729 12:59:54.874127  196252 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:54.874135  196252 out.go:304] Setting ErrFile to fd 2...
I0729 12:59:54.874140  196252 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:54.874315  196252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
I0729 12:59:54.874871  196252 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:54.874964  196252 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:54.875310  196252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:54.875352  196252 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:54.890899  196252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
I0729 12:59:54.891388  196252 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:54.891906  196252 main.go:141] libmachine: Using API Version  1
I0729 12:59:54.891927  196252 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:54.892303  196252 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:54.892471  196252 main.go:141] libmachine: (functional-673428) Calling .GetState
I0729 12:59:54.894179  196252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:54.894218  196252 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:54.909209  196252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
I0729 12:59:54.909580  196252 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:54.910022  196252 main.go:141] libmachine: Using API Version  1
I0729 12:59:54.910044  196252 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:54.910378  196252 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:54.910583  196252 main.go:141] libmachine: (functional-673428) Calling .DriverName
I0729 12:59:54.910807  196252 ssh_runner.go:195] Run: systemctl --version
I0729 12:59:54.910830  196252 main.go:141] libmachine: (functional-673428) Calling .GetSSHHostname
I0729 12:59:54.913358  196252 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:54.913811  196252 main.go:141] libmachine: (functional-673428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7a:2e", ip: ""} in network mk-functional-673428: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:44 +0000 UTC Type:0 Mac:52:54:00:14:7a:2e Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-673428 Clientid:01:52:54:00:14:7a:2e}
I0729 12:59:54.913835  196252 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined IP address 192.168.39.244 and MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:54.913950  196252 main.go:141] libmachine: (functional-673428) Calling .GetSSHPort
I0729 12:59:54.914122  196252 main.go:141] libmachine: (functional-673428) Calling .GetSSHKeyPath
I0729 12:59:54.914272  196252 main.go:141] libmachine: (functional-673428) Calling .GetSSHUsername
I0729 12:59:54.914423  196252 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/functional-673428/id_rsa Username:docker}
I0729 12:59:54.995516  196252 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 12:59:55.016245  196252 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.016259  196252 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.016573  196252 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.016599  196252 main.go:141] libmachine: (functional-673428) DBG | Closing plugin on server side
I0729 12:59:55.016604  196252 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:59:55.016624  196252 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.016635  196252 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.017067  196252 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.017082  196252 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-673428 image ls --format yaml --alsologtostderr:
- id: 32a83e8eba577e7cd6a7355527b79987c00be7eab9b72828c2421956eca8509e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-673428
size: "30"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-673428
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-673428 image ls --format yaml --alsologtostderr:
I0729 12:59:52.950106  196157 out.go:291] Setting OutFile to fd 1 ...
I0729 12:59:52.950364  196157 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:52.950376  196157 out.go:304] Setting ErrFile to fd 2...
I0729 12:59:52.950382  196157 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:52.950590  196157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
I0729 12:59:52.951143  196157 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:52.951268  196157 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:52.951660  196157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:52.951724  196157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:52.966841  196157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
I0729 12:59:52.967354  196157 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:52.967922  196157 main.go:141] libmachine: Using API Version  1
I0729 12:59:52.967949  196157 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:52.968248  196157 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:52.968426  196157 main.go:141] libmachine: (functional-673428) Calling .GetState
I0729 12:59:52.970113  196157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:52.970152  196157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:52.984659  196157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
I0729 12:59:52.985047  196157 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:52.985529  196157 main.go:141] libmachine: Using API Version  1
I0729 12:59:52.985550  196157 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:52.985862  196157 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:52.986040  196157 main.go:141] libmachine: (functional-673428) Calling .DriverName
I0729 12:59:52.986235  196157 ssh_runner.go:195] Run: systemctl --version
I0729 12:59:52.986267  196157 main.go:141] libmachine: (functional-673428) Calling .GetSSHHostname
I0729 12:59:52.988780  196157 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:52.989247  196157 main.go:141] libmachine: (functional-673428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7a:2e", ip: ""} in network mk-functional-673428: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:44 +0000 UTC Type:0 Mac:52:54:00:14:7a:2e Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-673428 Clientid:01:52:54:00:14:7a:2e}
I0729 12:59:52.989272  196157 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined IP address 192.168.39.244 and MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:52.989404  196157 main.go:141] libmachine: (functional-673428) Calling .GetSSHPort
I0729 12:59:52.989562  196157 main.go:141] libmachine: (functional-673428) Calling .GetSSHKeyPath
I0729 12:59:52.989679  196157 main.go:141] libmachine: (functional-673428) Calling .GetSSHUsername
I0729 12:59:52.989811  196157 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/functional-673428/id_rsa Username:docker}
I0729 12:59:53.072464  196157 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0729 12:59:53.093693  196157 main.go:141] libmachine: Making call to close driver server
I0729 12:59:53.093713  196157 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:53.093997  196157 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:53.094017  196157 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:59:53.094026  196157 main.go:141] libmachine: Making call to close driver server
I0729 12:59:53.094033  196157 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:53.094031  196157 main.go:141] libmachine: (functional-673428) DBG | Closing plugin on server side
I0729 12:59:53.094283  196157 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:53.094296  196157 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh pgrep buildkitd: exit status 1 (196.000404ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image build -t localhost/my-image:functional-673428 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-673428 image build -t localhost/my-image:functional-673428 testdata/build --alsologtostderr: (2.334185818s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-673428 image build -t localhost/my-image:functional-673428 testdata/build --alsologtostderr:
I0729 12:59:53.339711  196211 out.go:291] Setting OutFile to fd 1 ...
I0729 12:59:53.339986  196211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:53.339998  196211 out.go:304] Setting ErrFile to fd 2...
I0729 12:59:53.340002  196211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 12:59:53.340192  196211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
I0729 12:59:53.340817  196211 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:53.341463  196211 config.go:182] Loaded profile config "functional-673428": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 12:59:53.341819  196211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:53.341857  196211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:53.357492  196211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
I0729 12:59:53.358009  196211 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:53.358595  196211 main.go:141] libmachine: Using API Version  1
I0729 12:59:53.358623  196211 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:53.358961  196211 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:53.359169  196211 main.go:141] libmachine: (functional-673428) Calling .GetState
I0729 12:59:53.361102  196211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0729 12:59:53.361149  196211 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 12:59:53.376380  196211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
I0729 12:59:53.376832  196211 main.go:141] libmachine: () Calling .GetVersion
I0729 12:59:53.377373  196211 main.go:141] libmachine: Using API Version  1
I0729 12:59:53.377402  196211 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 12:59:53.377715  196211 main.go:141] libmachine: () Calling .GetMachineName
I0729 12:59:53.377951  196211 main.go:141] libmachine: (functional-673428) Calling .DriverName
I0729 12:59:53.378182  196211 ssh_runner.go:195] Run: systemctl --version
I0729 12:59:53.378212  196211 main.go:141] libmachine: (functional-673428) Calling .GetSSHHostname
I0729 12:59:53.380748  196211 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:53.381132  196211 main.go:141] libmachine: (functional-673428) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7a:2e", ip: ""} in network mk-functional-673428: {Iface:virbr1 ExpiryTime:2024-07-29 13:54:44 +0000 UTC Type:0 Mac:52:54:00:14:7a:2e Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-673428 Clientid:01:52:54:00:14:7a:2e}
I0729 12:59:53.381159  196211 main.go:141] libmachine: (functional-673428) DBG | domain functional-673428 has defined IP address 192.168.39.244 and MAC address 52:54:00:14:7a:2e in network mk-functional-673428
I0729 12:59:53.381304  196211 main.go:141] libmachine: (functional-673428) Calling .GetSSHPort
I0729 12:59:53.381480  196211 main.go:141] libmachine: (functional-673428) Calling .GetSSHKeyPath
I0729 12:59:53.381618  196211 main.go:141] libmachine: (functional-673428) Calling .GetSSHUsername
I0729 12:59:53.381747  196211 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/functional-673428/id_rsa Username:docker}
I0729 12:59:53.473072  196211 build_images.go:161] Building image from path: /tmp/build.685495053.tar
I0729 12:59:53.473145  196211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 12:59:53.483373  196211 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.685495053.tar
I0729 12:59:53.487790  196211 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.685495053.tar: stat -c "%s %y" /var/lib/minikube/build/build.685495053.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.685495053.tar': No such file or directory
I0729 12:59:53.487837  196211 ssh_runner.go:362] scp /tmp/build.685495053.tar --> /var/lib/minikube/build/build.685495053.tar (3072 bytes)
I0729 12:59:53.514213  196211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.685495053
I0729 12:59:53.526544  196211 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.685495053 -xf /var/lib/minikube/build/build.685495053.tar
I0729 12:59:53.538944  196211 docker.go:360] Building image: /var/lib/minikube/build/build.685495053
I0729 12:59:53.539028  196211 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-673428 /var/lib/minikube/build/build.685495053
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:d3254e03439779464a1e30d8cf314fcf6aab52b0246453acd17fef0d21465aa0
#8 writing image sha256:d3254e03439779464a1e30d8cf314fcf6aab52b0246453acd17fef0d21465aa0 done
#8 naming to localhost/my-image:functional-673428 done
#8 DONE 0.1s
I0729 12:59:55.598501  196211 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-673428 /var/lib/minikube/build/build.685495053: (2.059436952s)
I0729 12:59:55.598578  196211 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.685495053
I0729 12:59:55.611891  196211 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.685495053.tar
I0729 12:59:55.623634  196211 build_images.go:217] Built localhost/my-image:functional-673428 from /tmp/build.685495053.tar
I0729 12:59:55.623672  196211 build_images.go:133] succeeded building to: functional-673428
I0729 12:59:55.623678  196211 build_images.go:134] failed building to: 
I0729 12:59:55.623735  196211 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.623754  196211 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.624001  196211 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.624023  196211 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 12:59:55.624037  196211 main.go:141] libmachine: Making call to close driver server
I0729 12:59:55.624049  196211 main.go:141] libmachine: (functional-673428) Calling .Close
I0729 12:59:55.625045  196211 main.go:141] libmachine: (functional-673428) DBG | Closing plugin on server side
I0729 12:59:55.625059  196211 main.go:141] libmachine: Successfully made call to close driver server
I0729 12:59:55.625073  196211 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
TestFunctional/parallel/ImageCommands/Setup (1.54s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.518415635s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-673428
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)
TestFunctional/parallel/DockerEnv/bash (0.85s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-673428 docker-env) && out/minikube-linux-amd64 status -p functional-673428"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-673428 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.85s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image load --daemon docker.io/kicbase/echo-server:functional-673428 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 194480: os: process already finished
helpers_test.go:502: unable to terminate pid 194492: os: process already finished
helpers_test.go:502: unable to terminate pid 194535: os: process already finished
helpers_test.go:508: unable to kill pid 194457: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-673428 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [379373e4-51c1-4dfd-b945-8551f46b1274] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [379373e4-51c1-4dfd-b945-8551f46b1274] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.004462231s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.23s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image load --daemon docker.io/kicbase/echo-server:functional-673428 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-673428
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image load --daemon docker.io/kicbase/echo-server:functional-673428 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image save docker.io/kicbase/echo-server:functional-673428 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image rm docker.io/kicbase/echo-server:functional-673428 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-673428
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 image save --daemon docker.io/kicbase/echo-server:functional-673428 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-673428
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-673428 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.85.190 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-673428 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-673428 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-673428 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-n4ws2" [375ed7d0-c957-4637-9ca5-cc8648eee557] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-n4ws2" [375ed7d0-c957-4637-9ca5-cc8648eee557] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003797207s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "215.282207ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "43.987123ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "228.719817ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "54.624387ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
TestFunctional/parallel/MountCmd/any-port (13.59s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdany-port1696464119/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722257982658924746" to /tmp/TestFunctionalparallelMountCmdany-port1696464119/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722257982658924746" to /tmp/TestFunctionalparallelMountCmdany-port1696464119/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722257982658924746" to /tmp/TestFunctionalparallelMountCmdany-port1696464119/001/test-1722257982658924746
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.57785ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 12:59 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 12:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 12:59 test-1722257982658924746
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh cat /mount-9p/test-1722257982658924746
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-673428 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6ab347c0-5a0a-48ef-ae13-30934296f403] Pending
helpers_test.go:344: "busybox-mount" [6ab347c0-5a0a-48ef-ae13-30934296f403] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6ab347c0-5a0a-48ef-ae13-30934296f403] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6ab347c0-5a0a-48ef-ae13-30934296f403] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.004529051s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-673428 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdany-port1696464119/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.59s)
TestFunctional/parallel/ServiceCmd/List (1.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-673428 service list: (1.630213549s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.63s)
TestFunctional/parallel/ServiceCmd/JSONOutput (1.65s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-673428 service list -o json: (1.644901753s)
functional_test.go:1490: Took "1.645034971s" to run "out/minikube-linux-amd64 -p functional-673428 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.244:32144
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.50s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.244:32144
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdspecific-port3202959603/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.085145ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdspecific-port3202959603/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh "sudo umount -f /mount-9p": exit status 1 (190.32678ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-673428 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdspecific-port3202959603/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.45s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T" /mount1: exit status 1 (245.355024ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-673428 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-673428 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-673428 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1526524010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-673428
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-673428
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-673428
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (202.83s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-218750 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-218750 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m13.145991913s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-218750 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0729 13:42:46.201222  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.206553  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.216861  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.237155  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.277512  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.357870  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.518218  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:46.838725  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:47.479082  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:48.760281  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:51.321106  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:42:56.442114  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:43:06.683137  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-218750 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.177501551s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-218750 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-218750 addons enable gvisor: (5.242988148s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2546ea31-9017-4102-9f57-066f3b13d8cb] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005197578s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-218750 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [98cd7ede-6006-4082-aaad-72720cd2bc75] Pending
helpers_test.go:344: "nginx-gvisor" [98cd7ede-6006-4082-aaad-72720cd2bc75] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [98cd7ede-6006-4082-aaad-72720cd2bc75] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.00334643s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-218750
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-218750: (6.605024045s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-218750 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-218750 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m1.283344142s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2546ea31-9017-4102-9f57-066f3b13d8cb] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [2546ea31-9017-4102-9f57-066f3b13d8cb] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004304878s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [98cd7ede-6006-4082-aaad-72720cd2bc75] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003976186s
helpers_test.go:175: Cleaning up "gvisor-218750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-218750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-218750: (1.14661498s)
--- PASS: TestGvisorAddon (202.83s)

TestMultiControlPlane/serial/StartCluster (224.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-054709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0729 13:01:12.026715  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 13:01:39.717153  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-054709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m44.047183375s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (224.71s)

TestMultiControlPlane/serial/DeployApp (6.11s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-054709 -- rollout status deployment/busybox: (3.87664294s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-7r5cn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-bc5nf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-x8msj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-7r5cn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-bc5nf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-x8msj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-7r5cn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-bc5nf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-x8msj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.11s)

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-7r5cn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-7r5cn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-bc5nf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-bc5nf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-x8msj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-054709 -- exec busybox-fc5497c4f-x8msj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)

TestMultiControlPlane/serial/AddWorkerNode (65.53s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-054709 -v=7 --alsologtostderr
E0729 13:04:11.734312  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:11.739648  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:11.749999  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:11.770410  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:11.810769  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:11.891150  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:12.051759  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:12.372843  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:13.013825  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:14.294104  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:16.854945  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:21.976162  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:32.217110  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:04:52.697346  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-054709 -v=7 --alsologtostderr: (1m4.751669973s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (65.53s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-054709 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.55s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp testdata/cp-test.txt ha-054709:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3640034752/001/cp-test_ha-054709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709:/home/docker/cp-test.txt ha-054709-m02:/home/docker/cp-test_ha-054709_ha-054709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test_ha-054709_ha-054709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709:/home/docker/cp-test.txt ha-054709-m03:/home/docker/cp-test_ha-054709_ha-054709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test_ha-054709_ha-054709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709:/home/docker/cp-test.txt ha-054709-m04:/home/docker/cp-test_ha-054709_ha-054709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test_ha-054709_ha-054709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp testdata/cp-test.txt ha-054709-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3640034752/001/cp-test_ha-054709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m02:/home/docker/cp-test.txt ha-054709:/home/docker/cp-test_ha-054709-m02_ha-054709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test_ha-054709-m02_ha-054709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m02:/home/docker/cp-test.txt ha-054709-m03:/home/docker/cp-test_ha-054709-m02_ha-054709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test_ha-054709-m02_ha-054709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m02:/home/docker/cp-test.txt ha-054709-m04:/home/docker/cp-test_ha-054709-m02_ha-054709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test_ha-054709-m02_ha-054709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp testdata/cp-test.txt ha-054709-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3640034752/001/cp-test_ha-054709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m03:/home/docker/cp-test.txt ha-054709:/home/docker/cp-test_ha-054709-m03_ha-054709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test_ha-054709-m03_ha-054709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m03:/home/docker/cp-test.txt ha-054709-m02:/home/docker/cp-test_ha-054709-m03_ha-054709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test_ha-054709-m03_ha-054709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m03:/home/docker/cp-test.txt ha-054709-m04:/home/docker/cp-test_ha-054709-m03_ha-054709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test_ha-054709-m03_ha-054709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp testdata/cp-test.txt ha-054709-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3640034752/001/cp-test_ha-054709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m04:/home/docker/cp-test.txt ha-054709:/home/docker/cp-test_ha-054709-m04_ha-054709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709 "sudo cat /home/docker/cp-test_ha-054709-m04_ha-054709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m04:/home/docker/cp-test.txt ha-054709-m02:/home/docker/cp-test_ha-054709-m04_ha-054709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m02 "sudo cat /home/docker/cp-test_ha-054709-m04_ha-054709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 cp ha-054709-m04:/home/docker/cp-test.txt ha-054709-m03:/home/docker/cp-test_ha-054709-m04_ha-054709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 ssh -n ha-054709-m03 "sudo cat /home/docker/cp-test_ha-054709-m04_ha-054709-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.55s)
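The CopyFile steps above repeat one pattern: place a known payload with `minikube cp`, then `minikube ssh` into the target node and `cat` the file back to confirm the bytes survived the transfer. A minimal local sketch of that round-trip check, with plain `cp`/`diff` standing in for the minikube transport (file names and payload are hypothetical):

```shell
#!/bin/sh
# Sketch of the copy-then-verify pattern: cp/diff stand in for
# `minikube cp <src> <node>:<dst>` and `minikube ssh -n <node> "sudo cat <dst>"`.
set -eu
src=$(mktemp) && dst=$(mktemp)
printf 'Test file for minikube cp\n' > "$src"   # known payload (hypothetical contents)
cp "$src" "$dst"                                # the "minikube cp" step
if diff -q "$src" "$dst" >/dev/null; then       # the "ssh + sudo cat" verification
  result=match
else
  result=mismatch
fi
echo "copy check: $result"
rm -f "$src" "$dst"
```

The test does this for every source/destination node pair, which is why the same `ssh -n ... "sudo cat ..."` line appears after each `cp`.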

TestMultiControlPlane/serial/StopSecondaryNode (13.22s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-054709 node stop m02 -v=7 --alsologtostderr: (12.606043903s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr: exit status 7 (614.743426ms)

-- stdout --
	ha-054709
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-054709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054709-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-054709-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 13:05:24.907426  201147 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:05:24.907676  201147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:05:24.907686  201147 out.go:304] Setting ErrFile to fd 2...
	I0729 13:05:24.907692  201147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:05:24.907917  201147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 13:05:24.908117  201147 out.go:298] Setting JSON to false
	I0729 13:05:24.908151  201147 mustload.go:65] Loading cluster: ha-054709
	I0729 13:05:24.908253  201147 notify.go:220] Checking for updates...
	I0729 13:05:24.908570  201147 config.go:182] Loaded profile config "ha-054709": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:05:24.908589  201147 status.go:255] checking status of ha-054709 ...
	I0729 13:05:24.909046  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:24.909121  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:24.927282  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0729 13:05:24.927724  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:24.928340  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:24.928374  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:24.928746  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:24.928952  201147 main.go:141] libmachine: (ha-054709) Calling .GetState
	I0729 13:05:24.930825  201147 status.go:330] ha-054709 host status = "Running" (err=<nil>)
	I0729 13:05:24.930845  201147 host.go:66] Checking if "ha-054709" exists ...
	I0729 13:05:24.931167  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:24.931203  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:24.945946  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39269
	I0729 13:05:24.946414  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:24.947046  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:24.947078  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:24.947391  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:24.947580  201147 main.go:141] libmachine: (ha-054709) Calling .GetIP
	I0729 13:05:24.950426  201147 main.go:141] libmachine: (ha-054709) DBG | domain ha-054709 has defined MAC address 52:54:00:7f:d8:b8 in network mk-ha-054709
	I0729 13:05:24.950832  201147 main.go:141] libmachine: (ha-054709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:d8:b8", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:00:15 +0000 UTC Type:0 Mac:52:54:00:7f:d8:b8 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-054709 Clientid:01:52:54:00:7f:d8:b8}
	I0729 13:05:24.950891  201147 main.go:141] libmachine: (ha-054709) DBG | domain ha-054709 has defined IP address 192.168.39.59 and MAC address 52:54:00:7f:d8:b8 in network mk-ha-054709
	I0729 13:05:24.951057  201147 host.go:66] Checking if "ha-054709" exists ...
	I0729 13:05:24.951342  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:24.951378  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:24.967786  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0729 13:05:24.968274  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:24.968762  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:24.968788  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:24.969158  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:24.969348  201147 main.go:141] libmachine: (ha-054709) Calling .DriverName
	I0729 13:05:24.969507  201147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:05:24.969532  201147 main.go:141] libmachine: (ha-054709) Calling .GetSSHHostname
	I0729 13:05:24.972350  201147 main.go:141] libmachine: (ha-054709) DBG | domain ha-054709 has defined MAC address 52:54:00:7f:d8:b8 in network mk-ha-054709
	I0729 13:05:24.972758  201147 main.go:141] libmachine: (ha-054709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:d8:b8", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:00:15 +0000 UTC Type:0 Mac:52:54:00:7f:d8:b8 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-054709 Clientid:01:52:54:00:7f:d8:b8}
	I0729 13:05:24.972789  201147 main.go:141] libmachine: (ha-054709) DBG | domain ha-054709 has defined IP address 192.168.39.59 and MAC address 52:54:00:7f:d8:b8 in network mk-ha-054709
	I0729 13:05:24.972909  201147 main.go:141] libmachine: (ha-054709) Calling .GetSSHPort
	I0729 13:05:24.973135  201147 main.go:141] libmachine: (ha-054709) Calling .GetSSHKeyPath
	I0729 13:05:24.973311  201147 main.go:141] libmachine: (ha-054709) Calling .GetSSHUsername
	I0729 13:05:24.973447  201147 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/ha-054709/id_rsa Username:docker}
	I0729 13:05:25.055976  201147 ssh_runner.go:195] Run: systemctl --version
	I0729 13:05:25.062064  201147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:05:25.077948  201147 kubeconfig.go:125] found "ha-054709" server: "https://192.168.39.254:8443"
	I0729 13:05:25.077979  201147 api_server.go:166] Checking apiserver status ...
	I0729 13:05:25.078016  201147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:05:25.091636  201147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1921/cgroup
	W0729 13:05:25.106620  201147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1921/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:05:25.106689  201147 ssh_runner.go:195] Run: ls
	I0729 13:05:25.110757  201147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:05:25.114674  201147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:05:25.114702  201147 status.go:422] ha-054709 apiserver status = Running (err=<nil>)
	I0729 13:05:25.114716  201147 status.go:257] ha-054709 status: &{Name:ha-054709 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:05:25.114734  201147 status.go:255] checking status of ha-054709-m02 ...
	I0729 13:05:25.115162  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.115193  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.130144  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0729 13:05:25.130625  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.131204  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.131238  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.131584  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.131801  201147 main.go:141] libmachine: (ha-054709-m02) Calling .GetState
	I0729 13:05:25.133438  201147 status.go:330] ha-054709-m02 host status = "Stopped" (err=<nil>)
	I0729 13:05:25.133455  201147 status.go:343] host is not running, skipping remaining checks
	I0729 13:05:25.133463  201147 status.go:257] ha-054709-m02 status: &{Name:ha-054709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:05:25.133484  201147 status.go:255] checking status of ha-054709-m03 ...
	I0729 13:05:25.133813  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.133860  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.148650  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0729 13:05:25.149182  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.149653  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.149672  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.149995  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.150235  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetState
	I0729 13:05:25.151821  201147 status.go:330] ha-054709-m03 host status = "Running" (err=<nil>)
	I0729 13:05:25.151838  201147 host.go:66] Checking if "ha-054709-m03" exists ...
	I0729 13:05:25.152229  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.152275  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.168083  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44397
	I0729 13:05:25.168598  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.169111  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.169133  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.169499  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.169723  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetIP
	I0729 13:05:25.172906  201147 main.go:141] libmachine: (ha-054709-m03) DBG | domain ha-054709-m03 has defined MAC address 52:54:00:10:f6:82 in network mk-ha-054709
	I0729 13:05:25.173380  201147 main.go:141] libmachine: (ha-054709-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f6:82", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:02:34 +0000 UTC Type:0 Mac:52:54:00:10:f6:82 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-054709-m03 Clientid:01:52:54:00:10:f6:82}
	I0729 13:05:25.173407  201147 main.go:141] libmachine: (ha-054709-m03) DBG | domain ha-054709-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:10:f6:82 in network mk-ha-054709
	I0729 13:05:25.173567  201147 host.go:66] Checking if "ha-054709-m03" exists ...
	I0729 13:05:25.174016  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.174077  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.189932  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0729 13:05:25.190483  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.190959  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.190984  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.191276  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.191449  201147 main.go:141] libmachine: (ha-054709-m03) Calling .DriverName
	I0729 13:05:25.191603  201147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:05:25.191626  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetSSHHostname
	I0729 13:05:25.194366  201147 main.go:141] libmachine: (ha-054709-m03) DBG | domain ha-054709-m03 has defined MAC address 52:54:00:10:f6:82 in network mk-ha-054709
	I0729 13:05:25.194748  201147 main.go:141] libmachine: (ha-054709-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:f6:82", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:02:34 +0000 UTC Type:0 Mac:52:54:00:10:f6:82 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-054709-m03 Clientid:01:52:54:00:10:f6:82}
	I0729 13:05:25.194774  201147 main.go:141] libmachine: (ha-054709-m03) DBG | domain ha-054709-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:10:f6:82 in network mk-ha-054709
	I0729 13:05:25.194936  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetSSHPort
	I0729 13:05:25.195091  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetSSHKeyPath
	I0729 13:05:25.195275  201147 main.go:141] libmachine: (ha-054709-m03) Calling .GetSSHUsername
	I0729 13:05:25.195397  201147 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/ha-054709-m03/id_rsa Username:docker}
	I0729 13:05:25.273042  201147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:05:25.288352  201147 kubeconfig.go:125] found "ha-054709" server: "https://192.168.39.254:8443"
	I0729 13:05:25.288391  201147 api_server.go:166] Checking apiserver status ...
	I0729 13:05:25.288427  201147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:05:25.302259  201147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1963/cgroup
	W0729 13:05:25.312443  201147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1963/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:05:25.312516  201147 ssh_runner.go:195] Run: ls
	I0729 13:05:25.317233  201147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 13:05:25.321420  201147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 13:05:25.321448  201147 status.go:422] ha-054709-m03 apiserver status = Running (err=<nil>)
	I0729 13:05:25.321459  201147 status.go:257] ha-054709-m03 status: &{Name:ha-054709-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:05:25.321479  201147 status.go:255] checking status of ha-054709-m04 ...
	I0729 13:05:25.321791  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.321822  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.338415  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0729 13:05:25.338914  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.339432  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.339462  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.339752  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.339962  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetState
	I0729 13:05:25.341518  201147 status.go:330] ha-054709-m04 host status = "Running" (err=<nil>)
	I0729 13:05:25.341535  201147 host.go:66] Checking if "ha-054709-m04" exists ...
	I0729 13:05:25.341857  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.341885  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.357094  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0729 13:05:25.357554  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.358084  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.358107  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.358434  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.358648  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetIP
	I0729 13:05:25.361389  201147 main.go:141] libmachine: (ha-054709-m04) DBG | domain ha-054709-m04 has defined MAC address 52:54:00:36:a6:e6 in network mk-ha-054709
	I0729 13:05:25.361807  201147 main.go:141] libmachine: (ha-054709-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:a6:e6", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:04:08 +0000 UTC Type:0 Mac:52:54:00:36:a6:e6 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-054709-m04 Clientid:01:52:54:00:36:a6:e6}
	I0729 13:05:25.361835  201147 main.go:141] libmachine: (ha-054709-m04) DBG | domain ha-054709-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:36:a6:e6 in network mk-ha-054709
	I0729 13:05:25.361997  201147 host.go:66] Checking if "ha-054709-m04" exists ...
	I0729 13:05:25.362297  201147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:05:25.362337  201147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:05:25.378246  201147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0729 13:05:25.378719  201147 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:05:25.379213  201147 main.go:141] libmachine: Using API Version  1
	I0729 13:05:25.379235  201147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:05:25.379646  201147 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:05:25.379841  201147 main.go:141] libmachine: (ha-054709-m04) Calling .DriverName
	I0729 13:05:25.380043  201147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:05:25.380063  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetSSHHostname
	I0729 13:05:25.382628  201147 main.go:141] libmachine: (ha-054709-m04) DBG | domain ha-054709-m04 has defined MAC address 52:54:00:36:a6:e6 in network mk-ha-054709
	I0729 13:05:25.382982  201147 main.go:141] libmachine: (ha-054709-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:a6:e6", ip: ""} in network mk-ha-054709: {Iface:virbr1 ExpiryTime:2024-07-29 14:04:08 +0000 UTC Type:0 Mac:52:54:00:36:a6:e6 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-054709-m04 Clientid:01:52:54:00:36:a6:e6}
	I0729 13:05:25.383021  201147 main.go:141] libmachine: (ha-054709-m04) DBG | domain ha-054709-m04 has defined IP address 192.168.39.250 and MAC address 52:54:00:36:a6:e6 in network mk-ha-054709
	I0729 13:05:25.383177  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetSSHPort
	I0729 13:05:25.383337  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetSSHKeyPath
	I0729 13:05:25.383495  201147 main.go:141] libmachine: (ha-054709-m04) Calling .GetSSHUsername
	I0729 13:05:25.383592  201147 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/ha-054709-m04/id_rsa Username:docker}
	I0729 13:05:25.463852  201147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:05:25.478549  201147 status.go:257] ha-054709-m04 status: &{Name:ha-054709-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.22s)
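The `status` run above exits 7 rather than 0 because one node (ha-054709-m02) is stopped: the stderr trace shows minikube probing each node in turn (libmachine `GetState`, then `systemctl is-active kubelet`, an apiserver `pgrep`, and a `/healthz` GET for running hosts) and folding the results into a single exit code. A sketch of that aggregation, with hypothetical stand-in node states instead of real libmachine calls:

```shell
#!/bin/sh
# Sketch of the status aggregation: `minikube status` returns a non-zero
# exit code (7 here) when any node's host is stopped. The states below are
# hypothetical stand-ins mirroring the four-node output above.
set -u
states="Running Stopped Running Running"   # ha-054709, m02, m03, m04
rc=0
for s in $states; do
  [ "$s" = "Stopped" ] && rc=7             # mirror the exit-code convention seen in the log
done
echo "aggregate exit code: $rc"
```

This is why the test treats "Non-zero exit ... exit status 7" as the expected outcome after stopping a secondary node, not as a failure.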

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 node start m02 -v=7 --alsologtostderr
E0729 13:05:33.658920  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-054709 node start m02 -v=7 --alsologtostderr: (37.267298552s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (303.45s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-054709 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-054709 -v=7 --alsologtostderr
E0729 13:06:12.026989  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-054709 -v=7 --alsologtostderr: (40.666933037s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-054709 --wait=true -v=7 --alsologtostderr
E0729 13:06:55.579428  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:09:11.734528  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:09:39.421135  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-054709 --wait=true -v=7 --alsologtostderr: (4m22.682831041s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-054709
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (303.45s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.75s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 node delete m03 -v=7 --alsologtostderr
E0729 13:11:12.027352  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-054709 node delete m03 -v=7 --alsologtostderr: (7.022481042s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.75s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/StopCluster (38.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-054709 stop -v=7 --alsologtostderr: (38.008801933s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr: exit status 7 (103.970353ms)

-- stdout --
	ha-054709
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-054709-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 13:11:54.204000  203694 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:11:54.204138  203694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:11:54.204149  203694 out.go:304] Setting ErrFile to fd 2...
	I0729 13:11:54.204154  203694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:11:54.204355  203694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 13:11:54.204577  203694 out.go:298] Setting JSON to false
	I0729 13:11:54.204607  203694 mustload.go:65] Loading cluster: ha-054709
	I0729 13:11:54.204711  203694 notify.go:220] Checking for updates...
	I0729 13:11:54.205086  203694 config.go:182] Loaded profile config "ha-054709": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:11:54.205103  203694 status.go:255] checking status of ha-054709 ...
	I0729 13:11:54.205534  203694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:11:54.205597  203694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:11:54.221347  203694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0729 13:11:54.221963  203694 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:11:54.222560  203694 main.go:141] libmachine: Using API Version  1
	I0729 13:11:54.222586  203694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:11:54.222957  203694 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:11:54.223139  203694 main.go:141] libmachine: (ha-054709) Calling .GetState
	I0729 13:11:54.224741  203694 status.go:330] ha-054709 host status = "Stopped" (err=<nil>)
	I0729 13:11:54.224755  203694 status.go:343] host is not running, skipping remaining checks
	I0729 13:11:54.224761  203694 status.go:257] ha-054709 status: &{Name:ha-054709 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:11:54.224791  203694 status.go:255] checking status of ha-054709-m02 ...
	I0729 13:11:54.225248  203694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:11:54.225294  203694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:11:54.239982  203694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0729 13:11:54.240387  203694 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:11:54.241026  203694 main.go:141] libmachine: Using API Version  1
	I0729 13:11:54.241050  203694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:11:54.241368  203694 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:11:54.241540  203694 main.go:141] libmachine: (ha-054709-m02) Calling .GetState
	I0729 13:11:54.243135  203694 status.go:330] ha-054709-m02 host status = "Stopped" (err=<nil>)
	I0729 13:11:54.243153  203694 status.go:343] host is not running, skipping remaining checks
	I0729 13:11:54.243161  203694 status.go:257] ha-054709-m02 status: &{Name:ha-054709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:11:54.243182  203694 status.go:255] checking status of ha-054709-m04 ...
	I0729 13:11:54.243457  203694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:11:54.243491  203694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:11:54.260305  203694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I0729 13:11:54.260836  203694 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:11:54.261456  203694 main.go:141] libmachine: Using API Version  1
	I0729 13:11:54.261485  203694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:11:54.261803  203694 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:11:54.261989  203694 main.go:141] libmachine: (ha-054709-m04) Calling .GetState
	I0729 13:11:54.263657  203694 status.go:330] ha-054709-m04 host status = "Stopped" (err=<nil>)
	I0729 13:11:54.263689  203694 status.go:343] host is not running, skipping remaining checks
	I0729 13:11:54.263700  203694 status.go:257] ha-054709-m04 status: &{Name:ha-054709-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.11s)

TestMultiControlPlane/serial/RestartCluster (122.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-054709 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0729 13:12:35.078211  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-054709 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m2.054258121s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

TestMultiControlPlane/serial/AddSecondaryNode (82.42s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-054709 --control-plane -v=7 --alsologtostderr
E0729 13:14:11.734068  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-054709 --control-plane -v=7 --alsologtostderr: (1m21.560939408s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-054709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestImageBuild/serial/Setup (53.13s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-658770 --driver=kvm2 
E0729 13:16:12.026803  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-658770 --driver=kvm2 : (53.132266488s)
--- PASS: TestImageBuild/serial/Setup (53.13s)

TestImageBuild/serial/NormalBuild (2s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-658770
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-658770: (2.00366529s)
--- PASS: TestImageBuild/serial/NormalBuild (2.00s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-658770
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-658770: (1.020919926s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-658770
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-658770
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

TestJSONOutput/start/Command (65.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-558435 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-558435 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m5.813274827s)
--- PASS: TestJSONOutput/start/Command (65.81s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-558435 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-558435 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-558435 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-558435 --output=json --user=testUser: (12.659887305s)
--- PASS: TestJSONOutput/stop/Command (12.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-966089 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-966089 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.123556ms)

-- stdout --
	{"specversion":"1.0","id":"aa5f5d7a-f73b-46bd-b6e7-3b157138e057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-966089] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8104c19c-e312-4d63-a378-a675d947ae62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"a62b7816-12af-4802-b37b-7e1c48d1c9d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"263cd82f-6306-49a3-8381-9547e62e8a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig"}}
	{"specversion":"1.0","id":"3011fd52-a15a-45a4-8e5e-0fefe4f87850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube"}}
	{"specversion":"1.0","id":"bb616e26-0346-4598-a22a-a4ca12a62885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa5b3b41-bac1-4ee5-b775-90cf646ba932","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d7aaa62e-b361-4cd7-b1bd-c1f6068c2f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-966089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-966089
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.12s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-272310 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-272310 --driver=kvm2 : (49.114891536s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-279330 --driver=kvm2 
E0729 13:19:11.733606  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-279330 --driver=kvm2 : (51.182936622s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-272310
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-279330
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-279330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-279330
helpers_test.go:175: Cleaning up "first-272310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-272310
--- PASS: TestMinikubeProfile (103.12s)

TestMountStart/serial/StartWithMountFirst (30.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-657358 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-657358 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.789597685s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.79s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-657358 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-657358 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (27.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-677841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-677841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.794013341s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.79s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-657358 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-677841
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-677841: (2.277211231s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (26.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-677841
E0729 13:20:34.782207  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-677841: (25.079420804s)
--- PASS: TestMountStart/serial/RestartStopped (26.08s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-677841 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (137.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665381 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0729 13:21:12.026451  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665381 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m16.998246178s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.40s)

TestMultiNode/serial/DeployApp2Nodes (4.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-665381 -- rollout status deployment/busybox: (2.444169312s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-57845 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-mppk4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-57845 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-mppk4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-57845 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-mppk4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-57845 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-57845 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-mppk4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665381 -- exec busybox-fc5497c4f-mppk4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (54.39s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-665381 -v 3 --alsologtostderr
E0729 13:24:11.734212  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-665381 -v 3 --alsologtostderr: (53.809379805s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.39s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-665381 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp testdata/cp-test.txt multinode-665381:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1336525199/001/cp-test_multinode-665381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381:/home/docker/cp-test.txt multinode-665381-m02:/home/docker/cp-test_multinode-665381_multinode-665381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test_multinode-665381_multinode-665381-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381:/home/docker/cp-test.txt multinode-665381-m03:/home/docker/cp-test_multinode-665381_multinode-665381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test_multinode-665381_multinode-665381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp testdata/cp-test.txt multinode-665381-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1336525199/001/cp-test_multinode-665381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m02:/home/docker/cp-test.txt multinode-665381:/home/docker/cp-test_multinode-665381-m02_multinode-665381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test_multinode-665381-m02_multinode-665381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m02:/home/docker/cp-test.txt multinode-665381-m03:/home/docker/cp-test_multinode-665381-m02_multinode-665381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test_multinode-665381-m02_multinode-665381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp testdata/cp-test.txt multinode-665381-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1336525199/001/cp-test_multinode-665381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m03:/home/docker/cp-test.txt multinode-665381:/home/docker/cp-test_multinode-665381-m03_multinode-665381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381 "sudo cat /home/docker/cp-test_multinode-665381-m03_multinode-665381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 cp multinode-665381-m03:/home/docker/cp-test.txt multinode-665381-m02:/home/docker/cp-test_multinode-665381-m03_multinode-665381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 ssh -n multinode-665381-m02 "sudo cat /home/docker/cp-test_multinode-665381-m03_multinode-665381-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.28s)

TestMultiNode/serial/StopNode (3.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-665381 node stop m03: (2.507741551s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665381 status: exit status 7 (439.099133ms)

-- stdout --
	multinode-665381
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665381-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665381-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr: exit status 7 (443.897201ms)

-- stdout --
	multinode-665381
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665381-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665381-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 13:24:22.878568  212443 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:24:22.878688  212443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:24:22.878698  212443 out.go:304] Setting ErrFile to fd 2...
	I0729 13:24:22.878704  212443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:24:22.878943  212443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 13:24:22.879121  212443 out.go:298] Setting JSON to false
	I0729 13:24:22.879153  212443 mustload.go:65] Loading cluster: multinode-665381
	I0729 13:24:22.879264  212443 notify.go:220] Checking for updates...
	I0729 13:24:22.879688  212443 config.go:182] Loaded profile config "multinode-665381": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:24:22.879706  212443 status.go:255] checking status of multinode-665381 ...
	I0729 13:24:22.880121  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:22.880177  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:22.898168  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0729 13:24:22.898656  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:22.899235  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:22.899259  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:22.899735  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:22.900023  212443 main.go:141] libmachine: (multinode-665381) Calling .GetState
	I0729 13:24:22.901770  212443 status.go:330] multinode-665381 host status = "Running" (err=<nil>)
	I0729 13:24:22.901793  212443 host.go:66] Checking if "multinode-665381" exists ...
	I0729 13:24:22.902168  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:22.902216  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:22.918320  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0729 13:24:22.918992  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:22.919510  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:22.919537  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:22.919885  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:22.920121  212443 main.go:141] libmachine: (multinode-665381) Calling .GetIP
	I0729 13:24:22.923610  212443 main.go:141] libmachine: (multinode-665381) DBG | domain multinode-665381 has defined MAC address 52:54:00:85:a3:ac in network mk-multinode-665381
	I0729 13:24:22.924212  212443 main.go:141] libmachine: (multinode-665381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:a3:ac", ip: ""} in network mk-multinode-665381: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:09 +0000 UTC Type:0 Mac:52:54:00:85:a3:ac Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-665381 Clientid:01:52:54:00:85:a3:ac}
	I0729 13:24:22.924247  212443 main.go:141] libmachine: (multinode-665381) DBG | domain multinode-665381 has defined IP address 192.168.39.18 and MAC address 52:54:00:85:a3:ac in network mk-multinode-665381
	I0729 13:24:22.924396  212443 host.go:66] Checking if "multinode-665381" exists ...
	I0729 13:24:22.924850  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:22.924912  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:22.945935  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0729 13:24:22.946313  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:22.946880  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:22.946906  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:22.947224  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:22.947464  212443 main.go:141] libmachine: (multinode-665381) Calling .DriverName
	I0729 13:24:22.947678  212443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:24:22.947718  212443 main.go:141] libmachine: (multinode-665381) Calling .GetSSHHostname
	I0729 13:24:22.950576  212443 main.go:141] libmachine: (multinode-665381) DBG | domain multinode-665381 has defined MAC address 52:54:00:85:a3:ac in network mk-multinode-665381
	I0729 13:24:22.951109  212443 main.go:141] libmachine: (multinode-665381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:a3:ac", ip: ""} in network mk-multinode-665381: {Iface:virbr1 ExpiryTime:2024-07-29 14:21:09 +0000 UTC Type:0 Mac:52:54:00:85:a3:ac Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-665381 Clientid:01:52:54:00:85:a3:ac}
	I0729 13:24:22.951137  212443 main.go:141] libmachine: (multinode-665381) DBG | domain multinode-665381 has defined IP address 192.168.39.18 and MAC address 52:54:00:85:a3:ac in network mk-multinode-665381
	I0729 13:24:22.951369  212443 main.go:141] libmachine: (multinode-665381) Calling .GetSSHPort
	I0729 13:24:22.951568  212443 main.go:141] libmachine: (multinode-665381) Calling .GetSSHKeyPath
	I0729 13:24:22.951720  212443 main.go:141] libmachine: (multinode-665381) Calling .GetSSHUsername
	I0729 13:24:22.951854  212443 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/multinode-665381/id_rsa Username:docker}
	I0729 13:24:23.040764  212443 ssh_runner.go:195] Run: systemctl --version
	I0729 13:24:23.046834  212443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:24:23.062154  212443 kubeconfig.go:125] found "multinode-665381" server: "https://192.168.39.18:8443"
	I0729 13:24:23.062186  212443 api_server.go:166] Checking apiserver status ...
	I0729 13:24:23.062221  212443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 13:24:23.078679  212443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1962/cgroup
	W0729 13:24:23.088374  212443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1962/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 13:24:23.088440  212443 ssh_runner.go:195] Run: ls
	I0729 13:24:23.094104  212443 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0729 13:24:23.098309  212443 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0729 13:24:23.098338  212443 status.go:422] multinode-665381 apiserver status = Running (err=<nil>)
	I0729 13:24:23.098349  212443 status.go:257] multinode-665381 status: &{Name:multinode-665381 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:24:23.098370  212443 status.go:255] checking status of multinode-665381-m02 ...
	I0729 13:24:23.098667  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:23.098701  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:23.114154  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45569
	I0729 13:24:23.114640  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:23.115150  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:23.115176  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:23.115505  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:23.115688  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetState
	I0729 13:24:23.117408  212443 status.go:330] multinode-665381-m02 host status = "Running" (err=<nil>)
	I0729 13:24:23.117427  212443 host.go:66] Checking if "multinode-665381-m02" exists ...
	I0729 13:24:23.117810  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:23.117858  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:23.133710  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0729 13:24:23.134172  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:23.134668  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:23.134689  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:23.135018  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:23.135220  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetIP
	I0729 13:24:23.137803  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | domain multinode-665381-m02 has defined MAC address 52:54:00:c5:a4:0d in network mk-multinode-665381
	I0729 13:24:23.138178  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a4:0d", ip: ""} in network mk-multinode-665381: {Iface:virbr1 ExpiryTime:2024-07-29 14:22:28 +0000 UTC Type:0 Mac:52:54:00:c5:a4:0d Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-665381-m02 Clientid:01:52:54:00:c5:a4:0d}
	I0729 13:24:23.138207  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | domain multinode-665381-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:c5:a4:0d in network mk-multinode-665381
	I0729 13:24:23.138349  212443 host.go:66] Checking if "multinode-665381-m02" exists ...
	I0729 13:24:23.138744  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:23.138796  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:23.154781  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0729 13:24:23.155222  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:23.155722  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:23.155749  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:23.156069  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:23.156278  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .DriverName
	I0729 13:24:23.156455  212443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 13:24:23.156481  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetSSHHostname
	I0729 13:24:23.159635  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | domain multinode-665381-m02 has defined MAC address 52:54:00:c5:a4:0d in network mk-multinode-665381
	I0729 13:24:23.160097  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a4:0d", ip: ""} in network mk-multinode-665381: {Iface:virbr1 ExpiryTime:2024-07-29 14:22:28 +0000 UTC Type:0 Mac:52:54:00:c5:a4:0d Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-665381-m02 Clientid:01:52:54:00:c5:a4:0d}
	I0729 13:24:23.160126  212443 main.go:141] libmachine: (multinode-665381-m02) DBG | domain multinode-665381-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:c5:a4:0d in network mk-multinode-665381
	I0729 13:24:23.160349  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetSSHPort
	I0729 13:24:23.160514  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetSSHKeyPath
	I0729 13:24:23.160649  212443 main.go:141] libmachine: (multinode-665381-m02) Calling .GetSSHUsername
	I0729 13:24:23.160793  212443 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19338-179709/.minikube/machines/multinode-665381-m02/id_rsa Username:docker}
	I0729 13:24:23.244339  212443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 13:24:23.259140  212443 status.go:257] multinode-665381-m02 status: &{Name:multinode-665381-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:24:23.259176  212443 status.go:255] checking status of multinode-665381-m03 ...
	I0729 13:24:23.259559  212443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:24:23.259590  212443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:24:23.275372  212443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44597
	I0729 13:24:23.275860  212443 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:24:23.276331  212443 main.go:141] libmachine: Using API Version  1
	I0729 13:24:23.276354  212443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:24:23.276717  212443 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:24:23.276934  212443 main.go:141] libmachine: (multinode-665381-m03) Calling .GetState
	I0729 13:24:23.278746  212443 status.go:330] multinode-665381-m03 host status = "Stopped" (err=<nil>)
	I0729 13:24:23.278764  212443 status.go:343] host is not running, skipping remaining checks
	I0729 13:24:23.278771  212443 status.go:257] multinode-665381-m03 status: &{Name:multinode-665381-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.39s)
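The stderr trace above shows the status check sampling disk usage on each node with `df -h /var | awk 'NR==2{print $5}'`. A minimal offline sketch of that pipeline, using a hypothetical df line in place of a live node:

```shell
# Hypothetical df output for a node's /var; the pipeline takes row 2
# (the filesystem line) and field 5 (the Use% column). awk's default
# field splitting collapses the runs of spaces between columns.
df_out='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  3.1G   16G  17% /var'

usage=$(echo "$df_out" | awk 'NR==2{print $5}')
echo "$usage"   # -> 17%
```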

TestMultiNode/serial/StartAfterStop (42.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-665381 node start m03 -v=7 --alsologtostderr: (41.581012101s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.20s)

TestMultiNode/serial/RestartKeepsNodes (172.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665381
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-665381
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-665381: (27.183470636s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665381 --wait=true -v=8 --alsologtostderr
E0729 13:26:12.027250  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665381 --wait=true -v=8 --alsologtostderr: (2m25.047609212s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665381
--- PASS: TestMultiNode/serial/RestartKeepsNodes (172.33s)

TestMultiNode/serial/DeleteNode (2.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-665381 node delete m03: (1.621565007s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)

TestMultiNode/serial/StopMultiNode (25.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-665381 stop: (25.63871961s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665381 status: exit status 7 (84.060544ms)

-- stdout --
	multinode-665381
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665381-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr: exit status 7 (85.416853ms)

-- stdout --
	multinode-665381
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665381-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 13:28:25.736441  214175 out.go:291] Setting OutFile to fd 1 ...
	I0729 13:28:25.736703  214175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:28:25.736713  214175 out.go:304] Setting ErrFile to fd 2...
	I0729 13:28:25.736720  214175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 13:28:25.736925  214175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19338-179709/.minikube/bin
	I0729 13:28:25.737117  214175 out.go:298] Setting JSON to false
	I0729 13:28:25.737151  214175 mustload.go:65] Loading cluster: multinode-665381
	I0729 13:28:25.737259  214175 notify.go:220] Checking for updates...
	I0729 13:28:25.737576  214175 config.go:182] Loaded profile config "multinode-665381": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 13:28:25.737594  214175 status.go:255] checking status of multinode-665381 ...
	I0729 13:28:25.737987  214175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:28:25.738036  214175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:25.756946  214175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0729 13:28:25.757451  214175 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:25.758136  214175 main.go:141] libmachine: Using API Version  1
	I0729 13:28:25.758164  214175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:25.758659  214175 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:25.758901  214175 main.go:141] libmachine: (multinode-665381) Calling .GetState
	I0729 13:28:25.760593  214175 status.go:330] multinode-665381 host status = "Stopped" (err=<nil>)
	I0729 13:28:25.760607  214175 status.go:343] host is not running, skipping remaining checks
	I0729 13:28:25.760613  214175 status.go:257] multinode-665381 status: &{Name:multinode-665381 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 13:28:25.760663  214175 status.go:255] checking status of multinode-665381-m02 ...
	I0729 13:28:25.760940  214175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0729 13:28:25.761000  214175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 13:28:25.777041  214175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0729 13:28:25.777550  214175 main.go:141] libmachine: () Calling .GetVersion
	I0729 13:28:25.778022  214175 main.go:141] libmachine: Using API Version  1
	I0729 13:28:25.778047  214175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 13:28:25.778350  214175 main.go:141] libmachine: () Calling .GetMachineName
	I0729 13:28:25.778565  214175 main.go:141] libmachine: (multinode-665381-m02) Calling .GetState
	I0729 13:28:25.780279  214175 status.go:330] multinode-665381-m02 host status = "Stopped" (err=<nil>)
	I0729 13:28:25.780296  214175 status.go:343] host is not running, skipping remaining checks
	I0729 13:28:25.780303  214175 status.go:257] multinode-665381-m02 status: &{Name:multinode-665381-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.81s)

TestMultiNode/serial/RestartMultiNode (115.94s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665381 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0729 13:29:11.733952  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:29:15.079284  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665381 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m55.41829177s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665381 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.94s)

TestMultiNode/serial/ValidateNameConflict (51.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665381
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665381-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-665381-m02 --driver=kvm2 : exit status 14 (63.336886ms)

-- stdout --
	* [multinode-665381-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-665381-m02' is duplicated with machine name 'multinode-665381-m02' in profile 'multinode-665381'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665381-m03 --driver=kvm2 
E0729 13:31:12.027342  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665381-m03 --driver=kvm2 : (50.809374747s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-665381
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-665381: exit status 80 (206.473857ms)

-- stdout --
	* Adding node m03 to cluster multinode-665381 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-665381-m03 already exists in multinode-665381-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-665381-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.91s)
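The rule this test exercises can be sketched as follows. This is an illustrative Python model, not minikube's actual implementation; it only assumes the machine-naming scheme visible in the log above (`<profile>`, `<profile>-m02`, `<profile>-m03`, ...): a new profile name is rejected with MK_USAGE (exit status 14) when it collides with a machine name inside an existing profile.

```python
# Illustrative sketch (assumption): minikube rejects a new profile whose name
# collides with a machine name of an existing multi-node profile.
# Not minikube's real code.

def machine_names(profile, nodes):
    """Machines in a multi-node profile: <profile>, <profile>-m02, <profile>-m03, ..."""
    return [profile] + [f"{profile}-m{i:02d}" for i in range(2, nodes + 1)]

def is_duplicate(new_profile, existing):
    """existing maps profile name -> node count."""
    taken = {m for p, n in existing.items() for m in machine_names(p, n)}
    return new_profile in taken

existing = {"multinode-665381": 2}  # the two-node cluster from this run
print(is_duplicate("multinode-665381-m02", existing))  # True  -> rejected, exit 14
print(is_duplicate("multinode-665381-m03", existing))  # False -> start proceeds
```

This matches the run above: `multinode-665381-m02` is refused as a duplicate, while `multinode-665381-m03` starts as its own profile (and then trips the separate GUEST_NODE_ADD conflict on `node add`).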

TestPreload (153.55s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m28.306733146s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442809 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-442809 image pull gcr.io/k8s-minikube/busybox: (1.227774755s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-442809
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-442809: (12.447946361s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (50.30532308s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442809 image list
helpers_test.go:175: Cleaning up "test-preload-442809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-442809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-442809: (1.068434939s)
--- PASS: TestPreload (153.55s)

TestScheduledStopUnix (122.69s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-459852 --memory=2048 --driver=kvm2 
E0729 13:34:11.733693  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-459852 --memory=2048 --driver=kvm2 : (51.068336101s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459852 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-459852 -n scheduled-stop-459852
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459852 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459852 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459852 -n scheduled-stop-459852
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459852
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459852 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459852
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-459852: exit status 7 (64.843461ms)

-- stdout --
	scheduled-stop-459852
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459852 -n scheduled-stop-459852
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459852 -n scheduled-stop-459852: exit status 7 (65.081764ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-459852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-459852
--- PASS: TestScheduledStopUnix (122.69s)

TestSkaffold (125.77s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1862004734 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-110737 --memory=2600 --driver=kvm2 
E0729 13:36:12.027451  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-110737 --memory=2600 --driver=kvm2 : (47.12463001s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1862004734 run --minikube-profile skaffold-110737 --kube-context skaffold-110737 --status-check=true --port-forward=false --interactive=false
E0729 13:37:14.782587  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1862004734 run --minikube-profile skaffold-110737 --kube-context skaffold-110737 --status-check=true --port-forward=false --interactive=false: (1m6.713164554s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-79cd6876dd-vw5km" [da9855cf-156f-482c-962d-31dd3cd970ec] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.008060061s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-85797ddfd-8sfjr" [46903914-f5c5-4c95-a3ab-e3b66db6ef47] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004097857s
helpers_test.go:175: Cleaning up "skaffold-110737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-110737
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-110737: (1.182097122s)
--- PASS: TestSkaffold (125.77s)

TestRunningBinaryUpgrade (200.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.78731940 start -p running-upgrade-910356 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.78731940 start -p running-upgrade-910356 --memory=2200 --vm-driver=kvm2 : (2m16.846063689s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-910356 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-910356 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m1.815030072s)
helpers_test.go:175: Cleaning up "running-upgrade-910356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-910356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-910356: (1.217173575s)
--- PASS: TestRunningBinaryUpgrade (200.29s)

TestKubernetesUpgrade (215.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m49.774679373s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-499114
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-499114: (13.313749962s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-499114 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-499114 status --format={{.Host}}: exit status 7 (96.7869ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (44.659495436s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-499114 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (92.534425ms)

-- stdout --
	* [kubernetes-upgrade-499114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-499114
	    minikube start -p kubernetes-upgrade-499114 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4991142 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-499114 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-499114 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (45.912444397s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-499114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-499114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-499114: (1.092977282s)
--- PASS: TestKubernetesUpgrade (215.01s)
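The K8S_DOWNGRADE_UNSUPPORTED failure above (exit status 106) comes from a version comparison: minikube refuses to move an existing cluster to an older Kubernetes version. A minimal sketch of that comparison, assuming only semantic-version ordering (pre-release tags like `-beta.0` are ignored here for simplicity; this is not minikube's actual code):

```python
# Illustrative sketch of the downgrade guard: requesting a Kubernetes version
# older than the cluster's current one is refused. Pre-release suffixes are
# dropped for this sketch.
import re

def parse(version):
    """'v1.31.0-beta.0' -> (1, 31, 0)"""
    m = re.match(r"v(\d+)\.(\d+)\.(\d+)", version)
    return tuple(int(g) for g in m.groups())

def downgrade_requested(current, requested):
    return parse(requested) < parse(current)

print(downgrade_requested("v1.31.0-beta.0", "v1.20.0"))  # True  -> refused, exit 106
print(downgrade_requested("v1.20.0", "v1.31.0-beta.0"))  # False -> upgrade allowed
```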

TestPause/serial/Start (107.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-996381 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-996381 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m47.069946552s)
--- PASS: TestPause/serial/Start (107.07s)

TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (140.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2703820835 start -p stopped-upgrade-664768 --memory=2200 --vm-driver=kvm2 
E0729 13:39:11.734036  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2703820835 start -p stopped-upgrade-664768 --memory=2200 --vm-driver=kvm2 : (1m25.111008617s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2703820835 -p stopped-upgrade-664768 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2703820835 -p stopped-upgrade-664768 stop: (12.597478725s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-664768 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-664768 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (43.133667533s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (140.84s)

TestPause/serial/SecondStartNoReconfiguration (56.23s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-996381 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-996381 --alsologtostderr -v=1 --driver=kvm2 : (56.212147232s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.23s)

TestPause/serial/Pause (0.59s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-996381 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.59s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-996381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-996381 --output=json --layout=cluster: exit status 2 (244.871678ms)

-- stdout --
	{"Name":"pause-996381","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-996381","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
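The `--layout=cluster` JSON above encodes pause state as HTTP-style status codes (418 "Paused", 405 "Stopped", 200 "OK"). Parsing an abridged copy of the exact payload from this run shows how the per-component state can be extracted:

```python
# Parse an abridged copy of the `minikube status --output=json --layout=cluster`
# payload from this run and report each component's state.
import json

status = json.loads(
    '{"Name":"pause-996381","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-996381","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f"{name}: {comp['StatusName']} ({comp['StatusCode']})")
# apiserver: Paused (418)
# kubelet: Stopped (405)
```

Note that a paused cluster makes `status` itself exit non-zero (exit status 2 above), so callers have to tolerate that exit code when they only want the JSON.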

TestPause/serial/Unpause (0.55s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-996381 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

TestPause/serial/PauseAgain (0.71s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-996381 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

TestPause/serial/DeletePaused (1.02s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-996381 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-996381 --alsologtostderr -v=5: (1.024324051s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

TestPause/serial/VerifyDeletedResources (17.66s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.654991061s)
--- PASS: TestPause/serial/VerifyDeletedResources (17.66s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (63.903665ms)

-- stdout --
	* [NoKubernetes-782043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19338
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19338-179709/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19338-179709/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
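The MK_USAGE failure above is a pure flag-validation check: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal sketch of that check (illustrative only, with a hypothetical `validate_start_flags` helper; not minikube's actual code):

```python
# Illustrative sketch (assumption): the mutual-exclusion check behind
# exit status 14 above. Hypothetical helper, not minikube's real code.
def validate_start_flags(no_kubernetes, kubernetes_version):
    """Return the exit status minikube would use for this flag combination."""
    if no_kubernetes and kubernetes_version:
        print("X Exiting due to MK_USAGE: cannot specify --kubernetes-version "
              "with --no-kubernetes")
        return 14
    return 0

print(validate_start_flags(True, "1.20"))  # 14, as in the run above
print(validate_start_flags(True, None))    # 0: plain --no-kubernetes is fine
```

As the stderr hint above notes, the version can also arrive from global config, in which case `minikube config unset kubernetes-version` clears it.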

TestNoKubernetes/serial/StartWithK8s (58.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782043 --driver=kvm2 
E0729 13:41:12.027053  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782043 --driver=kvm2 : (58.095316784s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-782043 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (58.37s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-664768
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-664768: (1.097591827s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

TestNoKubernetes/serial/StartWithStopK8s (69.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --driver=kvm2 : (1m8.299645385s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-782043 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-782043 status -o json: exit status 2 (246.55387ms)

-- stdout --
	{"Name":"NoKubernetes-782043","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-782043
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-782043: (1.024755652s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.57s)
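The `status -o json` payload above is how the test confirms the VM is up with Kubernetes disabled: host `Running`, kubelet and apiserver `Stopped` (which is also why the command exits 2). Parsing the exact payload from this run:

```python
# Parse the `minikube status -o json` output shown above and derive whether
# Kubernetes components are running.
import json

raw = ('{"Name":"NoKubernetes-782043","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(raw)
k8s_running = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(status["Host"], "| kubernetes running:", k8s_running)
# Running | kubernetes running: False
```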

TestNoKubernetes/serial/Start (40.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782043 --no-kubernetes --driver=kvm2 : (40.541862661s)
--- PASS: TestNoKubernetes/serial/Start (40.54s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-782043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-782043 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.158024ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (0.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (2.47s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-782043
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-782043: (2.469983615s)
--- PASS: TestNoKubernetes/serial/Stop (2.47s)

TestNoKubernetes/serial/StartNoArgs (62.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-782043 --driver=kvm2 
E0729 13:44:08.124473  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:44:11.734254  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-782043 --driver=kvm2 : (1m2.173737585s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.17s)

TestNetworkPlugins/group/auto/Start (92.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m32.630317318s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-782043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-782043 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.28619ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestNetworkPlugins/group/kindnet/Start (138.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0729 13:45:30.045144  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (2m18.034409226s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (138.03s)

TestNetworkPlugins/group/calico/Start (123.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0729 13:45:55.079916  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 13:46:12.026959  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m3.884620059s)
--- PASS: TestNetworkPlugins/group/calico/Start (123.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jv7r4" [b9cf42c4-66dd-4af7-8916-1837ddd40116] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jv7r4" [b9cf42c4-66dd-4af7-8916-1837ddd40116] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004046178s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (82.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m22.613581668s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.61s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hm5xx" [487aff61-affb-4df6-bd70-25a50f08c7b1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005074786s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-57bjj" [d82a34e7-7338-4410-8928-29670ff472ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-57bjj" [d82a34e7-7338-4410-8928-29670ff472ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006051161s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/false/Start (83.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m23.895563296s)
--- PASS: TestNetworkPlugins/group/false/Start (83.90s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (86.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m26.944339813s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.94s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wkcqf" [ee5d3cad-bfa5-429e-ae5b-10ef68b660ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006837598s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (15.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v65bk" [4337bc2b-dfd0-4bb6-a870-9770559758c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v65bk" [4337bc2b-dfd0-4bb6-a870-9770559758c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.005087851s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.27s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-263785 "pgrep -a kubelet"
E0729 13:48:19.676301  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fkp5s" [360911ac-c986-467c-a5d5-db4f58e9aaeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 13:48:24.797079  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-fkp5s" [360911ac-c986-467c-a5d5-db4f58e9aaeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004464365s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (82.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0729 13:48:35.037577  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m22.883431829s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.88s)

TestNetworkPlugins/group/false/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

TestNetworkPlugins/group/false/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tkbzz" [f47d0788-7dc7-48ad-a291-2a094112abd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tkbzz" [f47d0788-7dc7-48ad-a291-2a094112abd8] Running
E0729 13:48:55.518001  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003702447s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.31s)

TestNetworkPlugins/group/bridge/Start (91.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m31.549482476s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.55s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/Start (86.92s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-263785 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m26.917159196s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (86.92s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gxfsw" [ceabe858-dcaf-4747-aa14-cb514a72cfae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gxfsw" [ceabe858-dcaf-4747-aa14-cb514a72cfae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005340917s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (179.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-436965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-436965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m59.180929356s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qdjfp" [ed747b65-2de3-4bb5-953f-b267f79d8b5f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.014465993s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cqmst" [f4ca7b69-f058-4dab-80e8-589f99478a42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cqmst" [f4ca7b69-f058-4dab-80e8-589f99478a42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.003999118s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-z6k8z" [7784c0fc-602f-4384-a765-1fa4580d9d74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-z6k8z" [7784c0fc-602f-4384-a765-1fa4580d9d74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004744139s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-263785 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-263785 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jbk65" [35804195-19a0-4da3-8274-d3ef2053bb1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jbk65" [35804195-19a0-4da3-8274-d3ef2053bb1a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004863485s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)

TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-918148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-918148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (1m22.460305413s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-263785 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-263785 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
E0729 13:57:37.760322  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:57:42.565930  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:57:46.201810  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:57:51.505029  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:58:04.789823  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:58:14.556336  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
E0729 13:58:19.188885  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:58:20.028275  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:58:26.541326  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:58:45.686828  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:58:47.714197  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:59:09.246092  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
E0729 13:59:11.733725  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:59:13.369687  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:59:16.128167  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-255171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0729 13:51:12.026868  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 13:51:27.876139  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:27.881421  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:27.891705  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:27.911967  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:27.952313  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:28.033298  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:28.193789  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:28.514733  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:29.155444  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:30.436165  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:32.996740  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:38.117368  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:51:48.357987  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-255171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (1m32.252377758s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.25s)

TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d48329ae-7195-4c48-804e-88ca44799c2f] Pending
helpers_test.go:344: "busybox" [d48329ae-7195-4c48-804e-88ca44799c2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 13:52:14.882250  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:14.887578  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:14.897917  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:14.918279  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:14.958645  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:15.039008  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d48329ae-7195-4c48-804e-88ca44799c2f] Running
E0729 13:52:15.199568  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:15.520233  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:16.160828  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:17.441122  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:20.001565  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005005846s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-918148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-918148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (13.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-918148 --alsologtostderr -v=3
E0729 13:52:25.122110  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-918148 --alsologtostderr -v=3: (13.319850691s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.32s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-918148 -n embed-certs-918148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-918148 -n embed-certs-918148: exit status 7 (75.818621ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-918148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (428.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-918148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0729 13:52:35.362302  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-918148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (7m8.679669015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-918148 -n embed-certs-918148
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (428.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-255171 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1aaadc2-dd7c-4c40-8ef8-eaae7e49cc45] Pending
helpers_test.go:344: "busybox" [b1aaadc2-dd7c-4c40-8ef8-eaae7e49cc45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1aaadc2-dd7c-4c40-8ef8-eaae7e49cc45] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003883066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-255171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-436965 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c1ecac16-abbb-4bed-a6b7-b8221afe8763] Pending
helpers_test.go:344: "busybox" [c1ecac16-abbb-4bed-a6b7-b8221afe8763] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0729 13:52:46.201413  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/skaffold-110737/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c1ecac16-abbb-4bed-a6b7-b8221afe8763] Running
E0729 13:52:49.799244  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004203932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-436965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-255171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 13:52:51.504453  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:51.509826  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:51.520206  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:51.540398  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:51.581078  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-255171 describe deploy/metrics-server -n kube-system
E0729 13:52:51.661185  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-255171 --alsologtostderr -v=3
E0729 13:52:51.822259  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:52.142978  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-255171 --alsologtostderr -v=3: (12.621782416s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-436965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 13:52:52.784064  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-436965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/old-k8s-version/serial/Stop (13.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-436965 --alsologtostderr -v=3
E0729 13:52:54.064426  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:52:55.843137  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:52:56.624738  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:53:01.745075  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-436965 --alsologtostderr -v=3: (13.324005622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171: exit status 7 (65.657264ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-255171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (388.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-255171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-255171 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (6m27.988152301s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (388.25s)

TestStartStop/group/no-preload/serial/Stop (61.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-965778 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-965778 --alsologtostderr -v=3: (1m1.401857908s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (61.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436965 -n old-k8s-version-436965
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436965 -n old-k8s-version-436965: exit status 7 (64.739944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-436965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (425.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-436965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0729 13:53:11.985832  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:53:14.555706  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
E0729 13:53:20.027917  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.033203  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.043480  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.063848  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.104180  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.184611  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.345229  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:20.666175  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:21.307274  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:22.588268  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:25.149004  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:30.269950  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:32.466610  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:53:36.803778  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:53:40.511135  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:53:42.240986  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/gvisor-218750/client.crt: no such file or directory
E0729 13:53:45.687045  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:45.692309  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:45.702532  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:45.722828  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:45.763164  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:45.843521  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:46.003975  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:46.325009  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:46.965360  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:48.245856  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:50.806868  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:53:54.783717  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:53:55.927203  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:54:00.992187  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:54:06.167718  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-436965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m4.879017416s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436965 -n old-k8s-version-436965
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (425.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778: exit status 7 (80.551642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-965778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (70.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-965778 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0729 13:54:11.719599  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:54:11.733788  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/functional-673428/client.crt: no such file or directory
E0729 13:54:13.427389  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:54:16.128286  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.133595  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.143962  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.164285  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.204812  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.285170  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.445679  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:16.766355  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:17.407075  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:18.688101  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:21.248356  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:26.369232  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:26.648094  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:54:36.610229  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:41.953258  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:54:53.915629  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:53.920954  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:53.931293  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:53.951630  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:53.991950  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:54.072300  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:54.232671  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:54.553289  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:55.194113  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:56.474721  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:54:57.090762  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:54:58.724930  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
E0729 13:54:59.035422  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:55:04.156452  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:55:07.608587  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
E0729 13:55:14.397446  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-965778 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (1m10.657959846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-965778 -n no-preload-965778
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (70.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zrxvp" [2abca301-50ed-462a-82ec-a63d5d680c60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zrxvp" [2abca301-50ed-462a-82ec-a63d5d680c60] Running
E0729 13:55:20.945793  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:20.951118  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:20.961439  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:20.981779  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:21.022122  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:21.102490  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:21.262929  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:21.583590  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:22.224037  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:23.504489  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.00482578s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-zrxvp" [2abca301-50ed-462a-82ec-a63d5d680c60] Running
E0729 13:55:26.064648  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004912517s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-965778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-965778 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-965778 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965778 -n no-preload-965778
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965778 -n no-preload-965778: exit status 2 (232.827386ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-965778 -n no-preload-965778
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-965778 -n no-preload-965778: exit status 2 (250.077082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-965778 --alsologtostderr -v=1
E0729 13:55:31.185791  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-965778 -n no-preload-965778
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-965778 -n no-preload-965778
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.40s)

TestStartStop/group/newest-cni/serial/FirstStart (60.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-342669 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0729 13:55:34.878243  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:55:35.347846  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/calico-263785/client.crt: no such file or directory
E0729 13:55:38.051629  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:55:41.426912  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:55:42.695799  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:42.701140  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:42.711407  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:42.731735  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:42.772034  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:42.852404  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:43.013207  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:43.334072  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:43.974435  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:45.255480  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:47.816497  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:55:52.937596  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:56:01.907814  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 13:56:03.178193  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:56:03.873458  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/custom-flannel-263785/client.crt: no such file or directory
E0729 13:56:12.026551  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/addons-543070/client.crt: no such file or directory
E0729 13:56:15.839465  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 13:56:23.659144  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:56:27.876029  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:56:29.529117  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/false-263785/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-342669 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (1m0.60032079s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.60s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-342669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (12.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-342669 --alsologtostderr -v=3
E0729 13:56:42.869011  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-342669 --alsologtostderr -v=3: (12.673840596s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.67s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342669 -n newest-cni-342669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342669 -n newest-cni-342669: exit status 7 (64.576428ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
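The "may be ok" note reflects the tolerance this step applies: as the log above shows, `minikube status --format={{.Host}}` prints `Stopped` and exits with status 7 when the host is down, and the test accepts that before re-enabling an addon. A minimal runnable sketch of that check, with the minikube invocation stubbed out (the stub function `minikube_status_host` and its hard-coded output are illustrative, not minikube's actual implementation):

```shell
# Stub standing in for: out/minikube-linux-amd64 status --format={{.Host}} -p <profile>
# (illustrative only; the real command prints the host state and, per the log
# above, exits 7 when the host is stopped)
minikube_status_host() { echo "Stopped"; return 7; }

host=$(minikube_status_host)
code=$?
# Exit 0 (running) and exit 7 (stopped) are both acceptable at this point;
# anything else would be a genuine status failure.
if [ "$code" -eq 0 ] || [ "$code" -eq 7 ]; then
  echo "host=$host: exit status $code (may be ok)"
else
  echo "unexpected exit status $code" >&2
  exit 1
fi
```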
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-342669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (39.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-342669 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0
E0729 13:56:55.560699  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/auto-263785/client.crt: no such file or directory
E0729 13:56:59.972682  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
E0729 13:57:04.620271  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kubenet-263785/client.crt: no such file or directory
E0729 13:57:14.882241  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/kindnet-263785/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-342669 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-beta.0: (39.312848981s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-342669 -n newest-cni-342669
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-342669 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-342669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342669 -n newest-cni-342669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342669 -n newest-cni-342669: exit status 2 (264.089878ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
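Here the non-zero exits are expected rather than errors: pausing deliberately stops components, so `status --format={{.Kubelet}}` reports `Stopped` and exits 2, which the test tolerates. A runnable sketch of that pattern, with the minikube call stubbed (the stub name `status_kubelet` and its output are illustrative, taken from the log above, not from minikube itself):

```shell
# Stub standing in for: out/minikube-linux-amd64 status --format={{.Kubelet}} -p <profile>
# (illustrative; while the cluster is paused, the kubelet is reported "Stopped"
# and the status command exits 2, as in the log above)
status_kubelet() { echo "Stopped"; return 2; }

out=$(status_kubelet)
code=$?
# Exit status 2 is acceptable for a paused cluster; other non-zero codes
# would indicate a real failure.
if [ "$code" -eq 2 ]; then
  echo "kubelet=$out: exit status $code (may be ok)"
else
  echo "unexpected exit status $code" >&2
fi
```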
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342669 -n newest-cni-342669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342669 -n newest-cni-342669: exit status 2 (254.200822ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-342669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-342669 -n newest-cni-342669
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-342669 -n newest-cni-342669
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-z7bmz" [54ae86fb-0013-4d2d-a680-56168cf9cf62] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-z7bmz" [54ae86fb-0013-4d2d-a680-56168cf9cf62] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003966255s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tbgmq" [2804254e-1fd4-4580-9db3-0c364ddc6694] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0729 13:59:43.813508  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/enable-default-cni-263785/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tbgmq" [2804254e-1fd4-4580-9db3-0c364ddc6694] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005000627s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-z7bmz" [54ae86fb-0013-4d2d-a680-56168cf9cf62] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003981504s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-255171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-255171 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-255171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171: exit status 2 (236.715293ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171: exit status 2 (240.611326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-255171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-255171 -n default-k8s-diff-port-255171
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.39s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tbgmq" [2804254e-1fd4-4580-9db3-0c364ddc6694] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003722844s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-918148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-918148 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-918148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-918148 -n embed-certs-918148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-918148 -n embed-certs-918148: exit status 2 (243.970991ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-918148 -n embed-certs-918148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-918148 -n embed-certs-918148: exit status 2 (238.360498ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-918148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-918148 -n embed-certs-918148
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-918148 -n embed-certs-918148
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zrkgm" [3007ee36-6705-44b0-9c99-4c4a21281951] Running
E0729 14:00:17.746833  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:17.752133  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:17.762445  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:17.782770  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:17.823141  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:17.903620  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:18.064162  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004013565s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zrkgm" [3007ee36-6705-44b0-9c99-4c4a21281951] Running
E0729 14:00:18.385277  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:19.026244  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:20.307130  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
E0729 14:00:20.945518  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/bridge-263785/client.crt: no such file or directory
E0729 14:00:21.601202  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/flannel-263785/client.crt: no such file or directory
E0729 14:00:22.868100  186951 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19338-179709/.minikube/profiles/no-preload-965778/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004238737s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-436965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-436965 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-436965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436965 -n old-k8s-version-436965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436965 -n old-k8s-version-436965: exit status 2 (252.183849ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436965 -n old-k8s-version-436965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436965 -n old-k8s-version-436965: exit status 2 (247.653768ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-436965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436965 -n old-k8s-version-436965
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436965 -n old-k8s-version-436965
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.28s)

Test skip (29/350)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.65s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-263785 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-263785

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-263785

>>> host: /etc/nsswitch.conf:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/hosts:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/resolv.conf:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-263785

>>> host: crictl pods:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: crictl containers:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> k8s: describe netcat deployment:
error: context "cilium-263785" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-263785" does not exist

>>> k8s: netcat logs:
error: context "cilium-263785" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-263785" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-263785" does not exist

>>> k8s: coredns logs:
error: context "cilium-263785" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-263785" does not exist

>>> k8s: api server logs:
error: context "cilium-263785" does not exist

>>> host: /etc/cni:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: ip a s:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: ip r s:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: iptables-save:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: iptables table nat:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-263785

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-263785

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-263785" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-263785" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-263785

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-263785

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-263785" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-263785" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-263785" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-263785" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-263785" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: kubelet daemon config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> k8s: kubelet logs:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-263785

>>> host: docker daemon status:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: docker daemon config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: docker system info:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: cri-docker daemon status:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: cri-docker daemon config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: cri-dockerd version:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: containerd daemon status:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: containerd daemon config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: containerd config dump:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: crio daemon status:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: crio daemon config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: /etc/crio:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

>>> host: crio config:
* Profile "cilium-263785" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-263785"

----------------------- debugLogs end: cilium-263785 [took: 3.492028591s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-263785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-263785
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-307621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-307621
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
