Test Report: KVM_Linux_crio 22332

56e1ce855180c73f84c0d958e6323d58f60b3065:2025-12-27:43013

Tests failed (1/355)

Order | Failed test | Duration
84 | TestFunctional/serial/ComponentHealth | 1.96s
TestFunctional/serial/ComponentHealth (1.96s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-866869 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:848: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.233 PodIP:192.168.39.233 StartTime:2025-12-27 20:02:56 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc001f5bf50 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc001d7f180} Ready:false RestartCount:2 Image:registry.k8s.io/etcd:3.6.6-0 ImageID:registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a ContainerID:cri-o://2d79b661aef28434e47e009490240e32f12ccd12aa87a23f4c8cd21f83bb358b}]}
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
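The check at functional_test.go:848 treats a control-plane pod as healthy only when its phase is Running and its Ready condition is True; the etcd pod above is Running but reports Ready=False, which is why only etcd fails while the other three components pass. The same logic, sketched in Python against the JSON shape `kubectl get po -o=json` returns (a hypothetical helper for illustration, not the actual Go test code):

```python
# Hypothetical re-implementation of the ComponentHealth readiness check:
# a pod passes only if status.phase == "Running" AND the "Ready"
# condition in status.conditions has status "True".

def component_healthy(pod: dict) -> bool:
    """Return True when the pod is Running and reports Ready=True."""
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in status.get("conditions", [])
    )

# The failing etcd pod from the log above: phase Running, Ready=False.
etcd = {"status": {"phase": "Running",
                   "conditions": [{"type": "Ready", "status": "False"}]}}
print(component_healthy(etcd))  # False: Running but not Ready, so the test fails
```

Note that Running and Ready are independent: a pod whose container is up but failing its readiness probe (as etcd does here, after 2 restarts) shows exactly this Running/NotReady combination.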
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-866869 -n functional-866869
helpers_test.go:253: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 logs -n 25: (1.322210369s)
helpers_test.go:261: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-067430 --log_dir /tmp/nospam-067430 unpause                                                          │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 19:59 UTC │ 27 Dec 25 19:59 UTC │
	│ unpause │ nospam-067430 --log_dir /tmp/nospam-067430 unpause                                                          │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 19:59 UTC │ 27 Dec 25 19:59 UTC │
	│ unpause │ nospam-067430 --log_dir /tmp/nospam-067430 unpause                                                          │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 19:59 UTC │ 27 Dec 25 19:59 UTC │
	│ stop    │ nospam-067430 --log_dir /tmp/nospam-067430 stop                                                             │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 19:59 UTC │ 27 Dec 25 20:00 UTC │
	│ stop    │ nospam-067430 --log_dir /tmp/nospam-067430 stop                                                             │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 20:00 UTC │ 27 Dec 25 20:00 UTC │
	│ stop    │ nospam-067430 --log_dir /tmp/nospam-067430 stop                                                             │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 20:00 UTC │ 27 Dec 25 20:00 UTC │
	│ delete  │ -p nospam-067430                                                                                            │ nospam-067430     │ jenkins │ v1.37.0 │ 27 Dec 25 20:00 UTC │ 27 Dec 25 20:00 UTC │
	│ start   │ -p functional-866869 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:00 UTC │ 27 Dec 25 20:01 UTC │
	│ start   │ -p functional-866869 --alsologtostderr -v=8                                                                 │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:01 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ functional-866869 cache add registry.k8s.io/pause:3.1                                                       │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ functional-866869 cache add registry.k8s.io/pause:3.3                                                       │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ functional-866869 cache add registry.k8s.io/pause:latest                                                    │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ functional-866869 cache add minikube-local-cache-test:functional-866869                                     │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ functional-866869 cache delete minikube-local-cache-test:functional-866869                                  │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                            │ minikube          │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ list                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ ssh     │ functional-866869 ssh sudo crictl images                                                                    │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ ssh     │ functional-866869 ssh sudo crictl rmi registry.k8s.io/pause:latest                                          │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ ssh     │ functional-866869 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │                     │
	│ cache   │ functional-866869 cache reload                                                                              │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ ssh     │ functional-866869 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                            │ minikube          │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                         │ minikube          │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ kubectl │ functional-866869 kubectl -- --context functional-866869 get pods                                           │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:02 UTC │
	│ start   │ -p functional-866869 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all    │ functional-866869 │ jenkins │ v1.37.0 │ 27 Dec 25 20:02 UTC │ 27 Dec 25 20:03 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:02:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:02:34.156903   67071 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:02:34.156996   67071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:34.156998   67071 out.go:374] Setting ErrFile to fd 2...
	I1227 20:02:34.157001   67071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:34.157233   67071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:02:34.157669   67071 out.go:368] Setting JSON to false
	I1227 20:02:34.158534   67071 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6304,"bootTime":1766859450,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:02:34.158634   67071 start.go:143] virtualization: kvm guest
	I1227 20:02:34.160408   67071 out.go:179] * [functional-866869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:02:34.162199   67071 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:02:34.162223   67071 notify.go:221] Checking for updates...
	I1227 20:02:34.164209   67071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:02:34.165326   67071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 20:02:34.166493   67071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 20:02:34.167464   67071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:02:34.168682   67071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:02:34.170291   67071 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:02:34.170394   67071 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:02:34.203306   67071 out.go:179] * Using the kvm2 driver based on existing profile
	I1227 20:02:34.204654   67071 start.go:309] selected driver: kvm2
	I1227 20:02:34.204664   67071 start.go:928] validating driver "kvm2" against &{Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:02:34.204787   67071 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:02:34.205664   67071 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:02:34.205689   67071 cni.go:84] Creating CNI manager for ""
	I1227 20:02:34.205763   67071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1227 20:02:34.205810   67071 start.go:353] cluster config:
	{Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:02:34.205891   67071 iso.go:125] acquiring lock: {Name:mka43d70ce37123bef7d956775bb3b0726c5ddc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:02:34.207500   67071 out.go:179] * Starting "functional-866869" primary control-plane node in "functional-866869" cluster
	I1227 20:02:34.208594   67071 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:02:34.208623   67071 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:02:34.208630   67071 cache.go:65] Caching tarball of preloaded images
	I1227 20:02:34.208792   67071 preload.go:251] Found /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:02:34.208800   67071 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:02:34.208885   67071 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/config.json ...
	I1227 20:02:34.209083   67071 start.go:360] acquireMachinesLock for functional-866869: {Name:mka9931fb06a62e71d190bf45bd86894fc3ea87e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 20:02:34.209122   67071 start.go:364] duration metric: took 28.137µs to acquireMachinesLock for "functional-866869"
	I1227 20:02:34.209137   67071 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:02:34.209141   67071 fix.go:54] fixHost starting: 
	I1227 20:02:34.210853   67071 fix.go:112] recreateIfNeeded on functional-866869: state=Running err=<nil>
	W1227 20:02:34.210876   67071 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:02:34.212436   67071 out.go:252] * Updating the running kvm2 "functional-866869" VM ...
	I1227 20:02:34.212455   67071 machine.go:94] provisionDockerMachine start ...
	I1227 20:02:34.215124   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.215589   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.215608   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.215810   67071 main.go:144] libmachine: Using SSH client type: native
	I1227 20:02:34.216047   67071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1227 20:02:34.216052   67071 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:02:34.327023   67071 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-866869
	
	I1227 20:02:34.327043   67071 buildroot.go:166] provisioning hostname "functional-866869"
	I1227 20:02:34.329774   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.330145   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.330167   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.330399   67071 main.go:144] libmachine: Using SSH client type: native
	I1227 20:02:34.330600   67071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1227 20:02:34.330606   67071 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-866869 && echo "functional-866869" | sudo tee /etc/hostname
	I1227 20:02:34.459572   67071 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-866869
	
	I1227 20:02:34.462412   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.462839   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.462867   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.463045   67071 main.go:144] libmachine: Using SSH client type: native
	I1227 20:02:34.463265   67071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1227 20:02:34.463275   67071 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-866869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-866869/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-866869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:02:34.577329   67071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:02:34.577362   67071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22332-59055/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-59055/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-59055/.minikube}
	I1227 20:02:34.577387   67071 buildroot.go:174] setting up certificates
	I1227 20:02:34.577402   67071 provision.go:84] configureAuth start
	I1227 20:02:34.580888   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.581359   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.581377   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.583968   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.584357   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.584372   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.584516   67071 provision.go:143] copyHostCerts
	I1227 20:02:34.584573   67071 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-59055/.minikube/ca.pem, removing ...
	I1227 20:02:34.584593   67071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.pem
	I1227 20:02:34.584682   67071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-59055/.minikube/ca.pem (1078 bytes)
	I1227 20:02:34.584824   67071 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-59055/.minikube/cert.pem, removing ...
	I1227 20:02:34.584830   67071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-59055/.minikube/cert.pem
	I1227 20:02:34.584860   67071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-59055/.minikube/cert.pem (1123 bytes)
	I1227 20:02:34.584921   67071 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-59055/.minikube/key.pem, removing ...
	I1227 20:02:34.584924   67071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-59055/.minikube/key.pem
	I1227 20:02:34.584946   67071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-59055/.minikube/key.pem (1679 bytes)
	I1227 20:02:34.585003   67071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-59055/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca-key.pem org=jenkins.functional-866869 san=[127.0.0.1 192.168.39.233 functional-866869 localhost minikube]
	I1227 20:02:34.669118   67071 provision.go:177] copyRemoteCerts
	I1227 20:02:34.669181   67071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:02:34.672044   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.672449   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.672467   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.672636   67071 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
	I1227 20:02:34.761346   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:02:34.795607   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:02:34.828674   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:02:34.861087   67071 provision.go:87] duration metric: took 283.6692ms to configureAuth
	I1227 20:02:34.861114   67071 buildroot.go:189] setting minikube options for container-runtime
	I1227 20:02:34.861341   67071 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:02:34.864648   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.865159   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:34.865185   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:34.865425   67071 main.go:144] libmachine: Using SSH client type: native
	I1227 20:02:34.865646   67071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1227 20:02:34.865655   67071 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:02:35.446621   67071 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:02:35.446644   67071 machine.go:97] duration metric: took 1.23418137s to provisionDockerMachine
	I1227 20:02:35.446659   67071 start.go:293] postStartSetup for "functional-866869" (driver="kvm2")
	I1227 20:02:35.446671   67071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:02:35.446753   67071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:02:35.450071   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.450573   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:35.450619   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.450805   67071 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
	I1227 20:02:35.541183   67071 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:02:35.546706   67071 info.go:137] Remote host: Buildroot 2025.02
	I1227 20:02:35.546760   67071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-59055/.minikube/addons for local assets ...
	I1227 20:02:35.546849   67071 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-59055/.minikube/files for local assets ...
	I1227 20:02:35.546922   67071 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/ssl/certs/629372.pem -> 629372.pem in /etc/ssl/certs
	I1227 20:02:35.546988   67071 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/test/nested/copy/62937/hosts -> hosts in /etc/test/nested/copy/62937
	I1227 20:02:35.547032   67071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/62937
	I1227 20:02:35.559556   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/ssl/certs/629372.pem --> /etc/ssl/certs/629372.pem (1708 bytes)
	I1227 20:02:35.596383   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/test/nested/copy/62937/hosts --> /etc/test/nested/copy/62937/hosts (40 bytes)
	I1227 20:02:35.630158   67071 start.go:296] duration metric: took 183.478668ms for postStartSetup
	I1227 20:02:35.630236   67071 fix.go:56] duration metric: took 1.421059499s for fixHost
	I1227 20:02:35.633574   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.633995   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:35.634016   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.634200   67071 main.go:144] libmachine: Using SSH client type: native
	I1227 20:02:35.634492   67071 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1227 20:02:35.634500   67071 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1227 20:02:35.749619   67071 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766865755.744614389
	
	I1227 20:02:35.749634   67071 fix.go:216] guest clock: 1766865755.744614389
	I1227 20:02:35.749640   67071 fix.go:229] Guest: 2025-12-27 20:02:35.744614389 +0000 UTC Remote: 2025-12-27 20:02:35.630241746 +0000 UTC m=+1.522455046 (delta=114.372643ms)
	I1227 20:02:35.749656   67071 fix.go:200] guest clock delta is within tolerance: 114.372643ms
	I1227 20:02:35.749660   67071 start.go:83] releasing machines lock for "functional-866869", held for 1.540533102s
	I1227 20:02:35.752689   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.753046   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:35.753064   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.753696   67071 ssh_runner.go:195] Run: cat /version.json
	I1227 20:02:35.753803   67071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:02:35.756645   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.757045   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:35.757061   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.757128   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.757250   67071 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
	I1227 20:02:35.757677   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:35.757701   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:35.757878   67071 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
	I1227 20:02:35.839243   67071 ssh_runner.go:195] Run: systemctl --version
	I1227 20:02:35.863879   67071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:02:36.011937   67071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:02:36.019383   67071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:02:36.019458   67071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:02:36.031847   67071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:02:36.031866   67071 start.go:496] detecting cgroup driver to use...
	I1227 20:02:36.031891   67071 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
	I1227 20:02:36.031968   67071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:02:36.053057   67071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:02:36.071206   67071 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:02:36.071256   67071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:02:36.094078   67071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:02:36.111432   67071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:02:36.336363   67071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:02:36.550167   67071 docker.go:234] disabling docker service ...
	I1227 20:02:36.550237   67071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:02:36.581308   67071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:02:36.598501   67071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:02:36.822473   67071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:02:37.023655   67071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:02:37.040845   67071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:02:37.065547   67071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:02:37.065627   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.079160   67071 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:02:37.079233   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.093547   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.107189   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.121098   67071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:02:37.134917   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.147946   67071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.161940   67071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:02:37.175520   67071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:02:37.187633   67071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:02:37.201891   67071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:02:37.415049   67071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:02:37.874955   67071 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:02:37.875033   67071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:02:37.881399   67071 start.go:574] Will wait 60s for crictl version
	I1227 20:02:37.881473   67071 ssh_runner.go:195] Run: which crictl
	I1227 20:02:37.886192   67071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 20:02:37.919852   67071 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1227 20:02:37.919966   67071 ssh_runner.go:195] Run: crio --version
	I1227 20:02:37.954373   67071 ssh_runner.go:195] Run: crio --version
	I1227 20:02:37.990227   67071 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1227 20:02:37.994457   67071 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:37.994977   67071 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
	I1227 20:02:37.995001   67071 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
	I1227 20:02:37.995188   67071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1227 20:02:38.001807   67071 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1227 20:02:38.003260   67071 kubeadm.go:884] updating cluster {Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:02:38.003413   67071 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:02:38.003470   67071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:02:38.046840   67071 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:02:38.046858   67071 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:02:38.046911   67071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:02:38.080007   67071 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:02:38.080025   67071 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:02:38.080035   67071 kubeadm.go:935] updating node { 192.168.39.233 8441 v1.35.0 crio true true} ...
	I1227 20:02:38.080158   67071 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-866869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:02:38.080237   67071 ssh_runner.go:195] Run: crio config
	I1227 20:02:38.137458   67071 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1227 20:02:38.137498   67071 cni.go:84] Creating CNI manager for ""
	I1227 20:02:38.137509   67071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1227 20:02:38.137518   67071 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:02:38.137540   67071 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-866869 NodeName:functional-866869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOp
ts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:02:38.137675   67071 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-866869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.233"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:02:38.137752   67071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:02:38.151970   67071 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:02:38.152054   67071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:02:38.165274   67071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1227 20:02:38.187251   67071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:02:38.209856   67071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1227 20:02:38.233216   67071 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I1227 20:02:38.238172   67071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:02:38.450831   67071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:02:38.469748   67071 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869 for IP: 192.168.39.233
	I1227 20:02:38.469761   67071 certs.go:195] generating shared ca certs ...
	I1227 20:02:38.469777   67071 certs.go:227] acquiring lock for ca certs: {Name:mkaababc7dc2fa0b2cccf395a6ff1958c07efd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:02:38.469994   67071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.key
	I1227 20:02:38.470050   67071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-59055/.minikube/proxy-client-ca.key
	I1227 20:02:38.470056   67071 certs.go:257] generating profile certs ...
	I1227 20:02:38.470147   67071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.key
	I1227 20:02:38.470188   67071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/apiserver.key.585845fc
	I1227 20:02:38.470222   67071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/proxy-client.key
	I1227 20:02:38.470327   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/62937.pem (1338 bytes)
	W1227 20:02:38.470358   67071 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-59055/.minikube/certs/62937_empty.pem, impossibly tiny 0 bytes
	I1227 20:02:38.470363   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:02:38.470384   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:02:38.470406   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:02:38.470424   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/certs/key.pem (1679 bytes)
	I1227 20:02:38.470462   67071 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/ssl/certs/629372.pem (1708 bytes)
	I1227 20:02:38.471158   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:02:38.507361   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:02:38.541342   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:02:38.572863   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:02:38.604830   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:02:38.640648   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:02:38.673006   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:02:38.706379   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:02:38.740498   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/certs/62937.pem --> /usr/share/ca-certificates/62937.pem (1338 bytes)
	I1227 20:02:38.773430   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/ssl/certs/629372.pem --> /usr/share/ca-certificates/629372.pem (1708 bytes)
	I1227 20:02:38.805174   67071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:02:38.837663   67071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:02:38.859754   67071 ssh_runner.go:195] Run: openssl version
	I1227 20:02:38.866682   67071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:02:38.879324   67071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:02:38.895053   67071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:02:38.902318   67071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:02:38.902373   67071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:02:38.910129   67071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:02:38.922631   67071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/62937.pem
	I1227 20:02:38.935684   67071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/62937.pem /etc/ssl/certs/62937.pem
	I1227 20:02:38.948343   67071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62937.pem
	I1227 20:02:38.954792   67071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/62937.pem
	I1227 20:02:38.954847   67071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62937.pem
	I1227 20:02:38.962851   67071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:02:38.975775   67071 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/629372.pem
	I1227 20:02:38.989491   67071 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/629372.pem /etc/ssl/certs/629372.pem
	I1227 20:02:39.002163   67071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629372.pem
	I1227 20:02:39.008349   67071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/629372.pem
	I1227 20:02:39.008409   67071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629372.pem
	I1227 20:02:39.016617   67071 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:02:39.030577   67071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:02:39.036937   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:02:39.044581   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:02:39.052507   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:02:39.060063   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:02:39.068040   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:02:39.075544   67071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:02:39.082869   67071 kubeadm.go:401] StartCluster: {Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:02:39.082951   67071 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:02:39.083017   67071 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:02:39.122195   67071 cri.go:96] found id: "ce191c599168870ba1b21a5aea73f026100dca03a8f5de06ee9f8c4625002569"
	I1227 20:02:39.122212   67071 cri.go:96] found id: "fdcacbca0e88cc7b2078f29fed3d52a0240d76b4af367957526f13d46c6ae327"
	I1227 20:02:39.122216   67071 cri.go:96] found id: "bd9d719e21a64fbb304f5c5880c24405c3114395ca851c56caa1fb1eed13d3de"
	I1227 20:02:39.122220   67071 cri.go:96] found id: "0c0fc056da8e6ab0f45b663c742ccd950720771653b60978bdf6d2b91ac6a56f"
	I1227 20:02:39.122224   67071 cri.go:96] found id: "a3b78daea5b561743fdcfaf4ff523bf3eb65aa9cf7650cdfad11ae53c6c39109"
	I1227 20:02:39.122227   67071 cri.go:96] found id: "7cc6ce7f1e32d80636f28c3de137b596be9ea7d125b48649e8a8a1d596b1832d"
	I1227 20:02:39.122230   67071 cri.go:96] found id: "f5ee32319e9274cd30cc9cb7ccc8dc3593153810b117a710339c041209f9142a"
	I1227 20:02:39.122233   67071 cri.go:96] found id: "184fea3e33981ccb8efdf67323a0d5fd4f6cde9d2963ed1cd9db428879de196f"
	I1227 20:02:39.122236   67071 cri.go:96] found id: "2c3773082525b68caef292485a709f468a4ca6174b9b03037fa82769109a7ff5"
	I1227 20:02:39.122266   67071 cri.go:96] found id: "74f9eadffeb3f22331e3806fc23ff902a743d9444d1543849836174980ed7096"
	I1227 20:02:39.122270   67071 cri.go:96] found id: "716c479bfd66fd1d4795b7fb25f8db84901323555444ecc4711defff384fae8c"
	I1227 20:02:39.122273   67071 cri.go:96] found id: "95b6278a1ffacc0a433008eb9c0f0d032b4b98534901dc588e99faa2c7d8114f"
	I1227 20:02:39.122276   67071 cri.go:96] found id: "14a0a905e7f2b7a2d9cd72441b0565f562ad83c238deeee6c8767769320c93f7"
	I1227 20:02:39.122279   67071 cri.go:96] found id: "184a5fc99508f1c6a749a3c91042358fdd4df627d6f33ad81a6b8af6a5a267e7"
	I1227 20:02:39.122282   67071 cri.go:96] found id: ""
	I1227 20:02:39.122347   67071 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-866869 -n functional-866869
helpers_test.go:270: (dbg) Run:  kubectl --context functional-866869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.96s)


Test pass (314/355)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.05
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.16
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 76.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 124.32
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 8.53
35 TestAddons/parallel/Registry 16.73
36 TestAddons/parallel/RegistryCreds 0.68
37 TestAddons/parallel/Ingress 21.29
38 TestAddons/parallel/InspektorGadget 11.22
39 TestAddons/parallel/MetricsServer 6.82
41 TestAddons/parallel/CSI 39.57
42 TestAddons/parallel/Headlamp 23.44
43 TestAddons/parallel/CloudSpanner 5.64
44 TestAddons/parallel/LocalPath 57.35
45 TestAddons/parallel/NvidiaDevicePlugin 6.78
46 TestAddons/parallel/Yakd 12.17
48 TestAddons/StoppedEnableDisable 35.05
49 TestCertOptions 50.77
50 TestCertExpiration 287.83
52 TestForceSystemdFlag 49.26
53 TestForceSystemdEnv 45.6
58 TestErrorSpam/setup 36.13
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 39.63
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 74.67
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 46.06
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 41.96
85 TestFunctional/serial/LogsCmd 1.33
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 11.82
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 1
97 TestFunctional/parallel/ServiceCmdConnect 8.7
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 39.62
101 TestFunctional/parallel/SSHCmd 0.35
102 TestFunctional/parallel/CpCmd 1.35
103 TestFunctional/parallel/MySQL 32.43
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.29
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
113 TestFunctional/parallel/License 0.25
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.44
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
130 TestFunctional/parallel/MountCmd/any-port 18.27
131 TestFunctional/parallel/ServiceCmd/List 0.31
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
134 TestFunctional/parallel/ServiceCmd/Format 0.41
135 TestFunctional/parallel/ServiceCmd/URL 0.42
136 TestFunctional/parallel/Version/short 0.07
137 TestFunctional/parallel/Version/components 0.99
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.54
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
143 TestFunctional/parallel/ImageCommands/Setup 0.35
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.43
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.65
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.7
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
149 TestFunctional/parallel/MountCmd/specific-port 1.56
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.73
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.26
153 TestFunctional/delete_echo-server_images 0.07
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 182.02
161 TestMultiControlPlane/serial/DeployApp 5.13
162 TestMultiControlPlane/serial/PingHostFromPods 1.32
163 TestMultiControlPlane/serial/AddWorkerNode 41.51
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
166 TestMultiControlPlane/serial/CopyFile 11.03
167 TestMultiControlPlane/serial/StopSecondaryNode 35.52
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
169 TestMultiControlPlane/serial/RestartSecondaryNode 32.55
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 221.76
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.44
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
174 TestMultiControlPlane/serial/StopCluster 104.39
175 TestMultiControlPlane/serial/RestartCluster 92.17
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
177 TestMultiControlPlane/serial/AddSecondaryNode 63.58
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
183 TestJSONOutput/start/Command 83.91
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 37.47
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 73.43
215 TestMountStart/serial/StartWithMountFirst 23.37
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 20.58
218 TestMountStart/serial/VerifyMountSecond 0.31
219 TestMountStart/serial/DeleteFirst 0.7
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.27
222 TestMountStart/serial/RestartStopped 17.9
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 93.94
227 TestMultiNode/serial/DeployApp2Nodes 4.22
228 TestMultiNode/serial/PingHostFrom2Pods 0.84
229 TestMultiNode/serial/AddNode 41.62
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.49
232 TestMultiNode/serial/CopyFile 6.1
233 TestMultiNode/serial/StopNode 2.25
234 TestMultiNode/serial/StartAfterStop 38.13
235 TestMultiNode/serial/RestartKeepsNodes 195.89
236 TestMultiNode/serial/DeleteNode 2.73
237 TestMultiNode/serial/StopMultiNode 68.52
238 TestMultiNode/serial/RestartMultiNode 86.67
239 TestMultiNode/serial/ValidateNameConflict 38.72
246 TestScheduledStopUnix 106.12
250 TestRunningBinaryUpgrade 457.17
252 TestKubernetesUpgrade 150.33
255 TestPreload/Start-NoPreload-PullImage 144.79
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 74.59
258 TestNoKubernetes/serial/StartWithStopK8s 116.82
260 TestPause/serial/Start 167.99
261 TestPreload/Restart-With-Preload-Check-User-Image 96.73
262 TestNoKubernetes/serial/Start 43.69
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
265 TestNoKubernetes/serial/ProfileList 1.42
266 TestNoKubernetes/serial/Stop 1.35
267 TestNoKubernetes/serial/StartNoArgs 17.45
269 TestPause/serial/SecondStartNoReconfiguration 72.46
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
278 TestStoppedBinaryUpgrade/Setup 0.7
279 TestStoppedBinaryUpgrade/Upgrade 102.07
280 TestPause/serial/Pause 0.9
281 TestPause/serial/VerifyStatus 0.28
282 TestPause/serial/Unpause 0.85
283 TestPause/serial/PauseAgain 1.01
284 TestPause/serial/DeletePaused 0.92
285 TestPause/serial/VerifyDeletedResources 4.34
286 TestStoppedBinaryUpgrade/MinikubeLogs 2.19
287 TestISOImage/Setup 20.16
289 TestISOImage/Binaries/crictl 0.2
290 TestISOImage/Binaries/curl 0.19
291 TestISOImage/Binaries/docker 0.21
292 TestISOImage/Binaries/git 0.18
293 TestISOImage/Binaries/iptables 0.18
294 TestISOImage/Binaries/podman 0.17
295 TestISOImage/Binaries/rsync 0.19
296 TestISOImage/Binaries/socat 0.17
297 TestISOImage/Binaries/wget 0.19
298 TestISOImage/Binaries/VBoxControl 0.17
299 TestISOImage/Binaries/VBoxService 0.17
307 TestNetworkPlugins/group/false 4.18
311 TestPreload/PreloadSrc/gcs 3.76
312 TestPreload/PreloadSrc/github 4.92
313 TestPreload/PreloadSrc/gcs-cached 0.24
315 TestStartStop/group/old-k8s-version/serial/FirstStart 98.39
317 TestStartStop/group/no-preload/serial/FirstStart 117.4
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.47
320 TestStartStop/group/old-k8s-version/serial/DeployApp 9.32
321 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
322 TestStartStop/group/old-k8s-version/serial/Stop 33.66
323 TestStartStop/group/no-preload/serial/DeployApp 8.33
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
326 TestStartStop/group/no-preload/serial/Stop 37.46
327 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
328 TestStartStop/group/old-k8s-version/serial/SecondStart 49.43
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 35.23
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
332 TestStartStop/group/no-preload/serial/SecondStart 53.27
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 66.14
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
337 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
338 TestStartStop/group/old-k8s-version/serial/Pause 3.87
340 TestStartStop/group/newest-cni/serial/FirstStart 46.13
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.34
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
345 TestStartStop/group/no-preload/serial/Pause 3.46
347 TestStartStop/group/embed-certs/serial/FirstStart 81.68
348 TestNetworkPlugins/group/auto/Start 103.71
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.51
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
354 TestStartStop/group/newest-cni/serial/Stop 35.98
355 TestNetworkPlugins/group/kindnet/Start 90.69
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
357 TestStartStop/group/newest-cni/serial/SecondStart 51.07
358 TestStartStop/group/embed-certs/serial/DeployApp 9.31
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
360 TestStartStop/group/embed-certs/serial/Stop 34.3
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
364 TestStartStop/group/newest-cni/serial/Pause 2.79
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/Start 98.76
367 TestNetworkPlugins/group/auto/KubeletFlags 0.18
368 TestNetworkPlugins/group/auto/NetCatPod 10.25
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
370 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
371 TestNetworkPlugins/group/auto/DNS 0.16
372 TestNetworkPlugins/group/auto/Localhost 0.12
373 TestNetworkPlugins/group/auto/HairPin 0.14
374 TestNetworkPlugins/group/kindnet/DNS 0.16
375 TestNetworkPlugins/group/kindnet/Localhost 0.15
376 TestNetworkPlugins/group/kindnet/HairPin 0.14
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
378 TestStartStop/group/embed-certs/serial/SecondStart 50.42
379 TestNetworkPlugins/group/custom-flannel/Start 83.9
380 TestNetworkPlugins/group/flannel/Start 96.92
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/embed-certs/serial/Pause 3.91
385 TestNetworkPlugins/group/enable-default-cni/Start 80.17
386 TestNetworkPlugins/group/calico/ControllerPod 6.01
387 TestNetworkPlugins/group/calico/KubeletFlags 0.2
388 TestNetworkPlugins/group/calico/NetCatPod 10.32
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
391 TestNetworkPlugins/group/calico/DNS 0.21
392 TestNetworkPlugins/group/calico/Localhost 0.18
393 TestNetworkPlugins/group/calico/HairPin 0.14
394 TestNetworkPlugins/group/custom-flannel/DNS 0.17
395 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
396 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/bridge/Start 76.62
399 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
400 TestNetworkPlugins/group/flannel/NetCatPod 11.36
402 TestISOImage/PersistentMounts//data 0.19
403 TestISOImage/PersistentMounts//var/lib/docker 0.2
404 TestISOImage/PersistentMounts//var/lib/cni 0.2
405 TestISOImage/PersistentMounts//var/lib/kubelet 0.2
406 TestISOImage/PersistentMounts//var/lib/minikube 0.21
407 TestISOImage/PersistentMounts//var/lib/toolbox 0.21
408 TestISOImage/PersistentMounts//var/lib/boot2docker 0.22
409 TestISOImage/VersionJSON 0.2
410 TestISOImage/eBPFSupport 0.19
411 TestNetworkPlugins/group/flannel/DNS 0.15
412 TestNetworkPlugins/group/flannel/Localhost 0.12
413 TestNetworkPlugins/group/flannel/HairPin 0.11
414 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
415 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
416 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
417 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
418 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
419 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
420 TestNetworkPlugins/group/bridge/NetCatPod 10.24
421 TestNetworkPlugins/group/bridge/DNS 0.13
422 TestNetworkPlugins/group/bridge/Localhost 0.12
423 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (6.47s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-743248 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-743248 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.474029642s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.47s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 19:55:06.439335   62937 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 19:55:06.439439   62937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-743248
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-743248: exit status 85 (74.907852ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-743248 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-743248 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:00.018477   62948 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:00.018711   62948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:00.018720   62948 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:00.018740   62948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:00.018960   62948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	W1227 19:55:00.019102   62948 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22332-59055/.minikube/config/config.json: open /home/jenkins/minikube-integration/22332-59055/.minikube/config/config.json: no such file or directory
	I1227 19:55:00.019595   62948 out.go:368] Setting JSON to true
	I1227 19:55:00.020496   62948 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5850,"bootTime":1766859450,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 19:55:00.020564   62948 start.go:143] virtualization: kvm guest
	I1227 19:55:00.025250   62948 out.go:99] [download-only-743248] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1227 19:55:00.025441   62948 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 19:55:00.025500   62948 notify.go:221] Checking for updates...
	I1227 19:55:00.026713   62948 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:00.028674   62948 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:00.030216   62948 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 19:55:00.031599   62948 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 19:55:00.033021   62948 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 19:55:00.035477   62948 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:00.035768   62948 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:00.069993   62948 out.go:99] Using the kvm2 driver based on user configuration
	I1227 19:55:00.070035   62948 start.go:309] selected driver: kvm2
	I1227 19:55:00.070041   62948 start.go:928] validating driver "kvm2" against <nil>
	I1227 19:55:00.070378   62948 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:00.070937   62948 start_flags.go:417] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1227 19:55:00.071076   62948 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:00.071112   62948 cni.go:84] Creating CNI manager for ""
	I1227 19:55:00.071164   62948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1227 19:55:00.071175   62948 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:00.071223   62948 start.go:353] cluster config:
	{Name:download-only-743248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-743248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:00.071428   62948 iso.go:125] acquiring lock: {Name:mka43d70ce37123bef7d956775bb3b0726c5ddc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 19:55:00.073181   62948 out.go:99] Downloading VM boot image ...
	I1227 19:55:00.073252   62948 download.go:114] Downloading: https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22332-59055/.minikube/cache/iso/amd64/minikube-v1.37.0-1766811082-22332-amd64.iso
	I1227 19:55:03.268302   62948 out.go:99] Starting "download-only-743248" primary control-plane node in "download-only-743248" cluster
	I1227 19:55:03.268368   62948 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 19:55:03.283788   62948 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1227 19:55:03.283824   62948 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:03.283991   62948 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 19:55:03.285799   62948 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 19:55:03.285831   62948 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1227 19:55:03.285838   62948 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1227 19:55:03.303107   62948 preload.go:313] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1227 19:55:03.303236   62948 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-743248 host does not exist
	  To start a cluster, run: "minikube start -p download-only-743248"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-743248
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (3.05s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-204629 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-204629 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.051172973s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.05s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 19:55:09.868894   62937 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 19:55:09.868948   62937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-59055/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-204629
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-204629: exit status 85 (73.387106ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-743248 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-743248 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-743248                                                                                                                                                 │ download-only-743248 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-204629 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-204629 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:06.872315   63143 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:06.872469   63143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:06.872479   63143 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:06.872486   63143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:06.872758   63143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 19:55:06.873272   63143 out.go:368] Setting JSON to true
	I1227 19:55:06.874095   63143 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5857,"bootTime":1766859450,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 19:55:06.874153   63143 start.go:143] virtualization: kvm guest
	I1227 19:55:06.876217   63143 out.go:99] [download-only-204629] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 19:55:06.876448   63143 notify.go:221] Checking for updates...
	I1227 19:55:06.877966   63143 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:06.879588   63143 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:06.881106   63143 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 19:55:06.882573   63143 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 19:55:06.883984   63143 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-204629 host does not exist
	  To start a cluster, run: "minikube start -p download-only-204629"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)
TestDownloadOnly/v1.35.0/DeleteAll (0.16s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.16s)
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-204629
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)
TestBinaryMirror (0.63s)
=== RUN   TestBinaryMirror
I1227 19:55:10.526195   62937 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-525893 --alsologtostderr --binary-mirror http://127.0.0.1:45387 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-525893" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-525893
--- PASS: TestBinaryMirror (0.63s)
TestOffline (76.34s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-964145 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-964145 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.433373134s)
helpers_test.go:176: Cleaning up "offline-crio-964145" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-964145
--- PASS: TestOffline (76.34s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-099251
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-099251: exit status 85 (65.504532ms)
-- stdout --
	* Profile "addons-099251" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-099251"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-099251
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-099251: exit status 85 (65.416453ms)
-- stdout --
	* Profile "addons-099251" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-099251"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
TestAddons/Setup (124.32s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-099251 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-099251 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.318770281s)
--- PASS: TestAddons/Setup (124.32s)
TestAddons/serial/GCPAuth/Namespaces (0.16s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-099251 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-099251 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
TestAddons/serial/GCPAuth/FakeCredentials (8.53s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-099251 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-099251 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4ec60349-3d85-4afb-95e8-c3efe6945a26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4ec60349-3d85-4afb-95e8-c3efe6945a26] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004885184s
addons_test.go:696: (dbg) Run:  kubectl --context addons-099251 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-099251 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-099251 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.53s)
TestAddons/parallel/Registry (16.73s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.896912ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-lb5k4" [44e41317-9157-48e9-8e0d-398a7486d1cf] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009751473s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-rlbzm" [1d5b9049-5d31-4bf5-82f7-24eb68482cf4] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006001353s
addons_test.go:394: (dbg) Run:  kubectl --context addons-099251 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-099251 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-099251 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.922103322s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 ip
2025/12/27 19:57:49 [DEBUG] GET http://192.168.39.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.73s)
TestAddons/parallel/RegistryCreds (0.68s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.929294ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-099251
addons_test.go:334: (dbg) Run:  kubectl --context addons-099251 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)
TestAddons/parallel/Ingress (21.29s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-099251 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-099251 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-099251 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6767ba9d-5f00-4b70-8876-32746479b7f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6767ba9d-5f00-4b70-8876-32746479b7f2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004601351s
I1227 19:58:00.259811   62937 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-099251 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable ingress-dns --alsologtostderr -v=1: (2.302971274s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable ingress --alsologtostderr -v=1: (7.826899368s)
--- PASS: TestAddons/parallel/Ingress (21.29s)
TestAddons/parallel/InspektorGadget (11.22s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-x25z2" [54eb1b8a-1e74-40a2-b4c9-0b16bb025ebe] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006555621s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable inspektor-gadget --alsologtostderr -v=1: (6.211127835s)
--- PASS: TestAddons/parallel/InspektorGadget (11.22s)
TestAddons/parallel/MetricsServer (6.82s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 16.671843ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-7bhpp" [29eabf86-b4e9-4b99-9b8b-43f621f50580] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003867061s
addons_test.go:465: (dbg) Run:  kubectl --context addons-099251 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)
TestAddons/parallel/CSI (39.57s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1227 19:57:46.595753   62937 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 19:57:46.602629   62937 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 19:57:46.602657   62937 kapi.go:107] duration metric: took 6.954285ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.965462ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-099251 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-099251 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [afcb5944-169f-42ca-b194-7c27c772cabc] Pending
helpers_test.go:353: "task-pv-pod" [afcb5944-169f-42ca-b194-7c27c772cabc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [afcb5944-169f-42ca-b194-7c27c772cabc] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00427405s
addons_test.go:574: (dbg) Run:  kubectl --context addons-099251 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-099251 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-099251 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-099251 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-099251 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-099251 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-099251 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [5beabba1-fbd0-4b52-aba5-80f57ec54d42] Pending
helpers_test.go:353: "task-pv-pod-restore" [5beabba1-fbd0-4b52-aba5-80f57ec54d42] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [5beabba1-fbd0-4b52-aba5-80f57ec54d42] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004609162s
addons_test.go:616: (dbg) Run:  kubectl --context addons-099251 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-099251 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-099251 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable volumesnapshots --alsologtostderr -v=1: (1.106119738s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.147043072s)
--- PASS: TestAddons/parallel/CSI (39.57s)
TestAddons/parallel/Headlamp (23.44s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-099251 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-099251 --alsologtostderr -v=1: (1.296493104s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-5lrdv" [cc379c62-50ce-4e53-93be-cae40f2455ea] Pending
helpers_test.go:353: "headlamp-6d8d595f-5lrdv" [cc379c62-50ce-4e53-93be-cae40f2455ea] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-5lrdv" [cc379c62-50ce-4e53-93be-cae40f2455ea] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.063539644s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable headlamp --alsologtostderr -v=1: (6.08184593s)
--- PASS: TestAddons/parallel/Headlamp (23.44s)
TestAddons/parallel/CloudSpanner (5.64s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-f97nz" [1deff9b5-62c7-48e9-a886-9ec7fa70f95e] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003897785s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)
TestAddons/parallel/LocalPath (57.35s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-099251 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-099251 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [55c207da-0378-4999-9b8c-ea8614f56cfb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [55c207da-0378-4999-9b8c-ea8614f56cfb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [55c207da-0378-4999-9b8c-ea8614f56cfb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008248532s
addons_test.go:969: (dbg) Run:  kubectl --context addons-099251 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 ssh "cat /opt/local-path-provisioner/pvc-95b08b78-c394-4a04-859c-3634ac1c5865_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-099251 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-099251 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.445842959s)
--- PASS: TestAddons/parallel/LocalPath (57.35s)
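The repeated `helpers_test.go:403` lines above are a poll loop: the helper re-runs `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until the phase reaches `Bound` or the 5m0s budget expires. A minimal sketch of that loop in shell, with `get_phase` as a stub standing in for the real `kubectl` call (which needs the live `addons-099251` cluster):

```shell
# Stub for:
#   kubectl --context addons-099251 get pvc test-pvc -n default \
#     -o jsonpath={.status.phase}
# Pretends the PVC binds on the third poll; $1 is the poll number.
get_phase() {
  if [ "$1" -ge 3 ]; then echo "Bound"; else echo "Pending"; fi
}

attempts=0
phase=Pending
# 9 polls stand in for the 5m0s timeout in the test
while [ "$attempts" -lt 9 ]; do
  attempts=$((attempts + 1))
  phase=$(get_phase "$attempts")
  [ "$phase" = "Bound" ] && break
done
echo "pvc phase after $attempts polls: $phase"
```

With the stub binding on poll three, the loop exits early instead of exhausting its budget, which is why the log shows a bounded number of `get pvc` lines rather than the full five minutes of polling.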

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-kx9zd" [89e08d24-4d97-40e2-bd62-49d4ababde5c] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005387368s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.78s)

                                                
                                    
TestAddons/parallel/Yakd (12.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-j95mh" [04f03b35-ef16-48af-a282-e7bbdae3292e] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004140785s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-099251 addons disable yakd --alsologtostderr -v=1: (6.160639114s)
--- PASS: TestAddons/parallel/Yakd (12.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (35.05s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-099251
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-099251: (34.845345329s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-099251
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-099251
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-099251
--- PASS: TestAddons/StoppedEnableDisable (35.05s)

                                                
                                    
TestCertOptions (50.77s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-941754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-941754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.475250198s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-941754 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-941754 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-941754 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-941754" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-941754
--- PASS: TestCertOptions (50.77s)
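The `openssl x509 -text -noout` step above verifies that the extra `--apiserver-ips`/`--apiserver-names` flags landed in the apiserver certificate's SANs. The check can be reproduced locally without a cluster by generating a throwaway self-signed cert carrying the same SAN list (assumes OpenSSL 1.1.1+ for `-addext`):

```shell
# Throwaway self-signed cert with the same extra SANs the test requests,
# inspected the same way the test inspects apiserver.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null
# Print the SAN extension; the test greps the same -text output for its flags.
san=$(openssl x509 -text -noout -in "$tmp/cert.pem" | grep -A1 "Subject Alternative Name")
echo "$san"
rm -rf "$tmp"
```

Each name or IP passed on the command line should appear as a `DNS:` or `IP Address:` entry in the printed extension.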

                                                
                                    
TestCertExpiration (287.83s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-501437 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-501437 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.921501058s)
E1227 20:41:27.950174   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-501437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-501437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.944897948s)
helpers_test.go:176: Cleaning up "cert-expiration-501437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-501437
--- PASS: TestCertExpiration (287.83s)

                                                
                                    
TestForceSystemdFlag (49.26s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-252781 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-252781 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.159198988s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-252781 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-252781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-252781
--- PASS: TestForceSystemdFlag (49.26s)

                                                
                                    
TestForceSystemdEnv (45.6s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-774096 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-774096 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.737303786s)
helpers_test.go:176: Cleaning up "force-systemd-env-774096" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-774096
--- PASS: TestForceSystemdEnv (45.60s)

                                                
                                    
TestErrorSpam/setup (36.13s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-067430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-067430 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-067430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-067430 --driver=kvm2  --container-runtime=crio: (36.132150835s)
--- PASS: TestErrorSpam/setup (36.13s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (39.63s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop: (35.702599327s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop: (2.06359852s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-067430 --log_dir /tmp/nospam-067430 stop: (1.860581971s)
--- PASS: TestErrorSpam/stop (39.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22332-59055/.minikube/files/etc/test/nested/copy/62937/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.67s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-866869 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m14.671380009s)
--- PASS: TestFunctional/serial/StartWithProxy (74.67s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 20:01:41.501491   62937 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --alsologtostderr -v=8
E1227 20:02:16.303944   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.309356   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.319756   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.340130   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.380515   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.460951   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.622109   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:16.942983   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:17.583316   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:18.864535   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:21.425786   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:26.546511   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-866869 --alsologtostderr -v=8: (46.057872158s)
functional_test.go:678: soft start took 46.05848804s for "functional-866869" cluster.
I1227 20:02:27.559789   62937 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (46.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-866869 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 cache add registry.k8s.io/pause:3.3: (1.069701637s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 cache add registry.k8s.io/pause:latest: (1.036875263s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-866869 /tmp/TestFunctionalserialCacheCmdcacheadd_local2265010442/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache add minikube-local-cache-test:functional-866869
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache delete minikube-local-cache-test:functional-866869
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-866869
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (177.762484ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
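The cache_reload sequence above checks three states: after `crictl rmi` the `inspecti` call must exit non-zero (the `no such image` failure captured in the log), and after `minikube cache reload` the same call must succeed. A stub-based sketch of that flow, since the real `minikube ssh`/`crictl` calls need the running `functional-866869` VM:

```shell
# image_present models whether registry.k8s.io/pause:latest exists in the node.
image_present=0
# Stands in for: minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest
inspecti_stub() { [ "$image_present" -eq 1 ]; }
# Stands in for: minikube cache reload (re-pushes cached images into the node)
reload_stub()   { image_present=1; }

if ! inspecti_stub; then
  echo "inspecti fails after rmi (exit status 1)"
fi
reload_stub
if inspecti_stub; then
  echo "inspecti succeeds after cache reload"
fi
```

The test treats the first non-zero exit as expected (logged as `Non-zero exit ... exit status 1`) and only the final `inspecti` is required to pass.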

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 kubectl -- --context functional-866869 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-866869 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 20:02:36.787647   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:57.268201   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-866869 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.957870123s)
functional_test.go:776: restart took 41.958008727s for "functional-866869" cluster.
I1227 20:03:16.061051   62937 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (41.96s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 logs: (1.328087452s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 logs --file /tmp/TestFunctionalserialLogsFileCmd3922972002/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 logs --file /tmp/TestFunctionalserialLogsFileCmd3922972002/001/logs.txt: (1.379980337s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-866869 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-866869
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-866869: exit status 115 (266.561942ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.233:32365 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-866869 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 config get cpus: exit status 14 (72.206427ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 config get cpus: exit status 14 (62.980663ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
TestFunctional/parallel/DashboardCmd (11.82s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-866869 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-866869 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 68566: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.82s)
TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-866869 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (160.692001ms)

-- stdout --
	* [functional-866869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1227 20:03:48.640833   68351 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:03:48.641062   68351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:48.641074   68351 out.go:374] Setting ErrFile to fd 2...
	I1227 20:03:48.641081   68351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:48.641449   68351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:03:48.642100   68351 out.go:368] Setting JSON to false
	I1227 20:03:48.643429   68351 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6379,"bootTime":1766859450,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:03:48.643530   68351 start.go:143] virtualization: kvm guest
	I1227 20:03:48.646338   68351 out.go:179] * [functional-866869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:03:48.647805   68351 notify.go:221] Checking for updates...
	I1227 20:03:48.648257   68351 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:03:48.649757   68351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:03:48.651049   68351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 20:03:48.653128   68351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 20:03:48.654584   68351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:03:48.656388   68351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:03:48.659232   68351 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:03:48.660060   68351 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:03:48.719080   68351 out.go:179] * Using the kvm2 driver based on existing profile
	I1227 20:03:48.721252   68351 start.go:309] selected driver: kvm2
	I1227 20:03:48.721272   68351 start.go:928] validating driver "kvm2" against &{Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:03:48.721416   68351 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:03:48.723869   68351 out.go:203] 
	W1227 20:03:48.725251   68351 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 20:03:48.726549   68351 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-866869 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-866869 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (111.888788ms)

-- stdout --
	* [functional-866869] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1227 20:03:48.906388   68421 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:03:48.906503   68421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:48.906524   68421 out.go:374] Setting ErrFile to fd 2...
	I1227 20:03:48.906535   68421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:48.906861   68421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:03:48.907336   68421 out.go:368] Setting JSON to false
	I1227 20:03:48.908148   68421 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6379,"bootTime":1766859450,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:03:48.908213   68421 start.go:143] virtualization: kvm guest
	I1227 20:03:48.909747   68421 out.go:179] * [functional-866869] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1227 20:03:48.911094   68421 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:03:48.911103   68421 notify.go:221] Checking for updates...
	I1227 20:03:48.913194   68421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:03:48.914278   68421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 20:03:48.915310   68421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 20:03:48.916387   68421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:03:48.917379   68421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:03:48.918839   68421 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:03:48.919469   68421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:03:48.951704   68421 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1227 20:03:48.952762   68421 start.go:309] selected driver: kvm2
	I1227 20:03:48.952782   68421 start.go:928] validating driver "kvm2" against &{Name:functional-866869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:03:48.952903   68421 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:03:48.955006   68421 out.go:203] 
	W1227 20:03:48.956111   68421 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 20:03:48.957088   68421 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
TestFunctional/parallel/StatusCmd (1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
TestFunctional/parallel/ServiceCmdConnect (8.7s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-866869 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-866869 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-j5nrd" [62c6c277-3d2e-4cfe-94b3-f4ef325d9635] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-j5nrd" [62c6c277-3d2e-4cfe-94b3-f4ef325d9635] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005021873s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.39.233:30442
functional_test.go:1685: http://192.168.39.233:30442: success! body:
Request served by hello-node-connect-5d95464fd4-j5nrd

HTTP/1.1 GET /

Host: 192.168.39.233:30442
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.70s)
TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
TestFunctional/parallel/PersistentVolumeClaim (39.62s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [94564fa5-0f62-4f00-ad0e-7132cadb6b58] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.145479947s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-866869 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-866869 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-866869 get pvc myclaim -o=json
I1227 20:03:32.414361   62937 retry.go:84] will retry after 1.7s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d82267f0-ad40-4080-94cc-394482bf551f ResourceVersion:774 Generation:0 CreationTimestamp:2025-12-27 20:03:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c0b090 VolumeMode:0xc001c0b0a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-866869 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-866869 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [dc3affb8-ce25-4674-9a85-a55daf89f791] Pending
helpers_test.go:353: "sp-pod" [dc3affb8-ce25-4674-9a85-a55daf89f791] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [dc3affb8-ce25-4674-9a85-a55daf89f791] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.004175169s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-866869 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-866869 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-866869 apply -f testdata/storage-provisioner/pod.yaml
I1227 20:03:54.448554   62937 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d40c5938-604b-4df0-878b-12f5ef19fa1d] Pending
helpers_test.go:353: "sp-pod" [d40c5938-604b-4df0-878b-12f5ef19fa1d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d40c5938-604b-4df0-878b-12f5ef19fa1d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004400966s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-866869 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.62s)
TestFunctional/parallel/SSHCmd (0.35s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)
TestFunctional/parallel/CpCmd (1.35s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh -n functional-866869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cp functional-866869:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1330007186/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh -n functional-866869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh -n functional-866869 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)
TestFunctional/parallel/MySQL (32.43s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-866869 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-j96wr" [89f89ab1-4316-4580-8ba6-c39e265492d6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-j96wr" [89f89ab1-4316-4580-8ba6-c39e265492d6] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.011893092s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;": exit status 1 (184.997515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;": exit status 1 (234.582095ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;": exit status 1 (196.477223ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;": exit status 1 (195.943123ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-866869 exec mysql-7d7b65bc95-j96wr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.43s)

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/62937/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /etc/test/nested/copy/62937/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/62937.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /etc/ssl/certs/62937.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/62937.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /usr/share/ca-certificates/62937.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/629372.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /etc/ssl/certs/629372.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/629372.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /usr/share/ca-certificates/629372.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-866869 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "sudo systemctl is-active docker": exit status 1 (179.373627ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "sudo systemctl is-active containerd": exit status 1 (183.063948ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
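`systemctl is-active` prints the unit state on stdout and signals it through the exit status (0 for active; "inactive" conventionally comes back as status 3), which is why the expected "disabled runtime" answer above still surfaces as a non-zero exit. Capturing both pieces looks like this; `is_active` is a stand-in that fakes systemctl so the sketch runs anywhere:

```shell
#!/bin/sh
# Capture both the printed state and the exit status of a
# `systemctl is-active <unit>`-style probe. is_active is a stand-in that
# behaves like systemctl on a disabled unit: it prints "inactive" and
# exits 3, the way docker/containerd show up in the log above.
is_active() {
  echo "inactive"
  return 3
}

state=$(is_active docker)   # command substitution preserves the exit status
rc=$?

if [ "$rc" -eq 0 ]; then
  echo "docker is running: $state"
else
  echo "docker is not running (state=$state, exit=$rc)"
fi
# prints: docker is not running (state=inactive, exit=3)
```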

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-866869 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-866869 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-vvr5v" [959c0c47-e62c-4d16-b712-9347b1b26d7b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-vvr5v" [959c0c47-e62c-4d16-b712-9347b1b26d7b] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.007441212s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "371.652483ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "70.023862ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "390.483913ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "77.478602ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
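The `Took "…" to run "…"` lines above are simple wall-clock measurements around a command. A minimal sketch of the same pattern (using GNU date's `%N` nanosecond field, which is an assumption; a portable variant would fall back to whole seconds with `%s`):

```shell
#!/bin/sh
# Measure how long a command takes, the way the harness reports
# `Took "…" to run "…"`. Assumes GNU date (%N) and a sleep that
# accepts fractional seconds.
start=$(date +%s%N)
sleep 0.2
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "Took \"${elapsed_ms}ms\" to run \"sleep 0.2\""
```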

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdany-port1972099335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766865808786587604" to /tmp/TestFunctionalparallelMountCmdany-port1972099335/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766865808786587604" to /tmp/TestFunctionalparallelMountCmdany-port1972099335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766865808786587604" to /tmp/TestFunctionalparallelMountCmdany-port1972099335/001/test-1766865808786587604
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.146971ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 20:03:28.980166   62937 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 20:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 20:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 20:03 test-1766865808786587604
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh cat /mount-9p/test-1766865808786587604
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-866869 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ef41a7b2-0388-454b-8fe0-6e52621d7e76] Pending
helpers_test.go:353: "busybox-mount" [ef41a7b2-0388-454b-8fe0-6e52621d7e76] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ef41a7b2-0388-454b-8fe0-6e52621d7e76] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ef41a7b2-0388-454b-8fe0-6e52621d7e76] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.006079991s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-866869 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdany-port1972099335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service list -o json
functional_test.go:1509: Took "328.022159ms" to run "out/minikube-linux-amd64 -p functional-866869 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.39.233:30868
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 service hello-node --url
I1227 20:03:34.378485   62937 detect.go:223] nested VM detected
functional_test.go:1580: found endpoint for hello-node: http://192.168.39.233:30868
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
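The endpoint found above is just the node's IP joined with the service's allocated NodePort. A minimal sketch of deriving it (IP and port are taken from the log; the `kubectl` jsonpath in the comment is the usual way to fetch the port in a live cluster, not what this test runs):

```shell
#!/bin/sh
# Build a NodePort service URL from a node IP and the allocated NodePort.
# In a live cluster the port would come from something like:
#   kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
node_ip="192.168.39.233"
node_port="30868"

url="http://${node_ip}:${node_port}"
echo "$url"    # prints: http://192.168.39.233:30868
```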

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-866869 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-866869
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-866869 image ls --format short --alsologtostderr:
I1227 20:03:58.197334   68723 out.go:360] Setting OutFile to fd 1 ...
I1227 20:03:58.197636   68723 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:58.197650   68723 out.go:374] Setting ErrFile to fd 2...
I1227 20:03:58.197655   68723 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:58.197881   68723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
I1227 20:03:58.198475   68723 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:58.198570   68723 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:58.200947   68723 ssh_runner.go:195] Run: systemctl --version
I1227 20:03:58.203786   68723 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:58.204328   68723 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
I1227 20:03:58.204358   68723 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:58.204543   68723 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
I1227 20:03:58.331228   68723 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
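A common follow-up to an `image ls --format short` dump like the one above is asserting that expected images are present. A sketch that greps a captured list for required entries; the three-entry `list` is pasted from the stdout above and `require` is an illustrative helper, not part of the harness:

```shell
#!/bin/sh
# Assert that required images appear in an `image ls --format short` dump.
list="registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5"

require() {
  # -F fixed string, -x whole line, -q quiet
  if printf '%s\n' "$list" | grep -Fxq "$1"; then
    echo "found: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

require "registry.k8s.io/etcd:3.6.6-0"   # prints: found: registry.k8s.io/etcd:3.6.6-0
```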

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-866869 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG         │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-866869  │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest             │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0            │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                             │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0            │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0            │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/pause                             │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test               │ functional-866869  │ 6caa31f61a7ae │ 3.33kB │
│ public.ecr.aws/nginx/nginx                        │ alpine             │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/pause                             │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0            │ 2c9a4b058bd7e │ 76.9MB │
└───────────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-866869 image ls --format table --alsologtostderr:
I1227 20:03:59.504927   68775 out.go:360] Setting OutFile to fd 1 ...
I1227 20:03:59.505268   68775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.505284   68775 out.go:374] Setting ErrFile to fd 2...
I1227 20:03:59.505291   68775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.505619   68775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
I1227 20:03:59.506518   68775 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.506695   68775 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.509293   68775 ssh_runner.go:195] Run: systemctl --version
I1227 20:03:59.511809   68775 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.512318   68775 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
I1227 20:03:59.512356   68775 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.512523   68775 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
I1227 20:03:59.645572   68775 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
2025/12/27 20:04:01 [DEBUG] GET http://127.0.0.1:41437/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-866869 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21","public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4944818"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6caa31f61a7ae0d61cd47fe004bc8b8f3a7a2c564dadf19fc26726674443fd2f","repoDigests":["localhost/minikube-local-cache-test@sha256:318b93ac01cdafa6378f955d9c1df9ae4041f7ebc517f5bdba390ddf2668838a"],"repoTags":["localhost/minikube-local-cache-test:functional-866869"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-866869 image ls --format json --alsologtostderr:
I1227 20:03:59.120000   68755 out.go:360] Setting OutFile to fd 1 ...
I1227 20:03:59.120159   68755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.120172   68755 out.go:374] Setting ErrFile to fd 2...
I1227 20:03:59.120179   68755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.120414   68755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
I1227 20:03:59.121261   68755 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.121402   68755 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.124244   68755 ssh_runner.go:195] Run: systemctl --version
I1227 20:03:59.129109   68755 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.129903   68755 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
I1227 20:03:59.129950   68755 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.130116   68755 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
I1227 20:03:59.282225   68755 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
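The `image ls --format json` stdout above is a flat JSON array of image records with `id`, `repoDigests`, `repoTags`, and `size` (bytes, serialized as a string). A minimal sketch of consuming that shape — the two records below are abbreviated stand-ins, not the full listing:

```python
import json

# Abbreviated sample in the same shape as the `image ls --format json` output above.
payload = json.loads("""
[
  {"id": "cd073f4c", "repoDigests": ["registry.k8s.io/pause@sha256:278f"],
   "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092"},
  {"id": "07655ddf", "repoDigests": ["docker.io/kubernetesui/dashboard@sha256:ca93"],
   "repoTags": [], "size": "249229937"}
]
""")

total_bytes = sum(int(img["size"]) for img in payload)   # sizes are JSON strings, not numbers
untagged = [img["id"] for img in payload if not img["repoTags"]]

print(total_bytes)   # 249972029
print(untagged)      # ['07655ddf']
```

Note that untagged images (like the dashboard entry above) appear with `"repoTags":[]` rather than being omitted, so consumers must handle empty tag lists.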

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-866869 image ls --format yaml --alsologtostderr:
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4944818"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6caa31f61a7ae0d61cd47fe004bc8b8f3a7a2c564dadf19fc26726674443fd2f
repoDigests:
- localhost/minikube-local-cache-test@sha256:318b93ac01cdafa6378f955d9c1df9ae4041f7ebc517f5bdba390ddf2668838a
repoTags:
- localhost/minikube-local-cache-test:functional-866869
size: "3330"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-866869 image ls --format yaml --alsologtostderr:
I1227 20:03:58.520226   68734 out.go:360] Setting OutFile to fd 1 ...
I1227 20:03:58.520549   68734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:58.520561   68734 out.go:374] Setting ErrFile to fd 2...
I1227 20:03:58.520566   68734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:58.520820   68734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
I1227 20:03:58.521474   68734 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:58.521591   68734 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:58.523678   68734 ssh_runner.go:195] Run: systemctl --version
I1227 20:03:58.525962   68734 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:58.526461   68734 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
I1227 20:03:58.526494   68734 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:58.526674   68734 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
I1227 20:03:58.698904   68734 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh pgrep buildkitd: exit status 1 (270.953737ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image build -t localhost/my-image:functional-866869 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 image build -t localhost/my-image:functional-866869 testdata/build --alsologtostderr: (2.986541552s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-866869 image build -t localhost/my-image:functional-866869 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d761a91a3dc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-866869
--> 7ef73145029
Successfully tagged localhost/my-image:functional-866869
7ef73145029867d1dad8f11a1c53b9872a184af2c0d2dd20ccb5395182aa8fa8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-866869 image build -t localhost/my-image:functional-866869 testdata/build --alsologtostderr:
I1227 20:03:59.150148   68764 out.go:360] Setting OutFile to fd 1 ...
I1227 20:03:59.150467   68764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.150478   68764 out.go:374] Setting ErrFile to fd 2...
I1227 20:03:59.150482   68764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:03:59.150670   68764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
I1227 20:03:59.151260   68764 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.152186   68764 config.go:182] Loaded profile config "functional-866869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:03:59.155036   68764 ssh_runner.go:195] Run: systemctl --version
I1227 20:03:59.157914   68764 main.go:144] libmachine: domain functional-866869 has defined MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.158447   68764 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:ec:a4", ip: ""} in network mk-functional-866869: {Iface:virbr1 ExpiryTime:2025-12-27 21:00:42 +0000 UTC Type:0 Mac:52:54:00:61:ec:a4 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:functional-866869 Clientid:01:52:54:00:61:ec:a4}
I1227 20:03:59.158488   68764 main.go:144] libmachine: domain functional-866869 has defined IP address 192.168.39.233 and MAC address 52:54:00:61:ec:a4 in network mk-functional-866869
I1227 20:03:59.158670   68764 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/functional-866869/id_rsa Username:docker}
I1227 20:03:59.306587   68764 build_images.go:162] Building image from path: /tmp/build.2376340231.tar
I1227 20:03:59.306663   68764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 20:03:59.365290   68764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2376340231.tar
I1227 20:03:59.382223   68764 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2376340231.tar: stat -c "%s %y" /var/lib/minikube/build/build.2376340231.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2376340231.tar': No such file or directory
I1227 20:03:59.382282   68764 ssh_runner.go:362] scp /tmp/build.2376340231.tar --> /var/lib/minikube/build/build.2376340231.tar (3072 bytes)
I1227 20:03:59.464458   68764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2376340231
I1227 20:03:59.501032   68764 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2376340231 -xf /var/lib/minikube/build/build.2376340231.tar
I1227 20:03:59.541227   68764 crio.go:315] Building image: /var/lib/minikube/build/build.2376340231
I1227 20:03:59.541306   68764 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-866869 /var/lib/minikube/build/build.2376340231 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1227 20:04:02.029281   68764 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-866869 /var/lib/minikube/build/build.2376340231 --cgroup-manager=cgroupfs: (2.487943493s)
I1227 20:04:02.029360   68764 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2376340231
I1227 20:04:02.044599   68764 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2376340231.tar
I1227 20:04:02.058387   68764 build_images.go:218] Built localhost/my-image:functional-866869 from /tmp/build.2376340231.tar
I1227 20:04:02.058430   68764 build_images.go:134] succeeded building to: functional-866869
I1227 20:04:02.058436   68764 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)
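Per the stderr above, `image build` stages the build context as a tarball (`/tmp/build.*.tar`), copies it to `/var/lib/minikube/build/` on the node, extracts it there, and runs `sudo podman build` against the extracted directory. The local staging step can be sketched roughly as follows (a stdlib sketch, not minikube's actual build_images.go):

```python
import pathlib
import tarfile
import tempfile

def stage_build_context(build_dir: str) -> str:
    """Pack a build directory into a tarball, ready to copy to the node
    (mirroring the /tmp/build.*.tar staging seen in the log; illustrative only)."""
    fd = tempfile.NamedTemporaryFile(prefix="build.", suffix=".tar", delete=False)
    with tarfile.open(fileobj=fd, mode="w") as tar:
        for path in sorted(pathlib.Path(build_dir).rglob("*")):
            # Store paths relative to the context root, as the extracted layout expects.
            tar.add(path, arcname=path.relative_to(build_dir), recursive=False)
    fd.close()
    return fd.name

# Usage: stage a minimal context containing only a Dockerfile.
with tempfile.TemporaryDirectory() as ctx:
    (pathlib.Path(ctx) / "Dockerfile").write_text(
        "FROM gcr.io/k8s-minikube/busybox\nRUN true\n")
    tar_path = stage_build_context(ctx)
    with tarfile.open(tar_path) as tar:
        names = tar.getnames()

print(names)  # ['Dockerfile']
```

The 3072-byte scp in the log is consistent with a context this small: one Dockerfile plus the tar header padding.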

TestFunctional/parallel/ImageCommands/Setup (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.35s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr
E1227 20:03:38.229115   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr: (6.20867211s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-866869 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr: (1.882988906s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.65s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)
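`image save` writes the image out as a tarball (`echo-server-save.tar` above), which `ImageLoadFromFile` later loads back. The archive contents are not shown in the log; assuming a docker-archive style layout with a top-level `manifest.json` (an assumption, not verified from this run), a quick integrity probe could look like:

```python
import io
import json
import tarfile
import tempfile

def archive_manifest(tar_path: str):
    """Return the parsed manifest.json from an image tarball, or None if absent.
    (docker-archive layout assumed; the log above does not show the archive contents.)"""
    with tarfile.open(tar_path) as tar:
        try:
            member = tar.extractfile("manifest.json")
        except KeyError:
            return None
        return json.load(member)

# Usage against a synthetic archive (a stand-in for echo-server-save.tar).
archive_path = tempfile.NamedTemporaryFile(suffix=".tar", delete=False).name
with tarfile.open(archive_path, "w") as tar:
    data = json.dumps([{"RepoTags": [
        "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869"]}]).encode()
    info = tarfile.TarInfo("manifest.json")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

manifest = archive_manifest(archive_path)
print(manifest[0]["RepoTags"])
```

A probe like this distinguishes a truncated save from a well-formed archive before attempting the load step.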

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdspecific-port144284019/001:/mount-9p --alsologtostderr -v=1 --port 33741]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.394792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 20:03:47.282672   62937 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdspecific-port144284019/001:/mount-9p --alsologtostderr -v=1 --port 33741] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "sudo umount -f /mount-9p": exit status 1 (182.5466ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-866869 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdspecific-port144284019/001:/mount-9p --alsologtostderr -v=1 --port 33741] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)
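The first `findmnt` probe above races the background mount daemon and exits 1, after which the harness retries once after 600ms (`retry.go:84`). That probe-with-backoff pattern can be sketched generically as (a generic sketch, not minikube's retry.go; the single 600ms delay mirrors the log):

```python
import time

def retry(fn, delays=(0.6,)):
    """Call fn; on exception, sleep the next delay and try again.
    Raises once the delay schedule is exhausted."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return fn(), attempts
        except Exception:
            if attempts > len(delays):
                raise
            time.sleep(delays[attempts - 1])

# Usage: a probe that fails on the first call, like the findmnt check above.
state = {"calls": 0}
def probe():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("findmnt: /mount-9p not ready")
    return "9p on /mount-9p"

result, attempts = retry(probe)
print(result, attempts)  # 9p on /mount-9p 2
```

Retrying the probe rather than the mount itself is what keeps the test fast here: the daemon finishes mounting during the 600ms window, so the second `findmnt` succeeds.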

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.26s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T" /mount1: exit status 1 (217.432656ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-866869 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-866869 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-866869 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3670188843/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.26s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866869
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-866869
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-866869
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (182.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1227 20:05:00.150699   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m1.400211207s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (182.02s)

TestMultiControlPlane/serial/DeployApp (5.13s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 kubectl -- rollout status deployment/busybox: (2.898069885s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-fm25h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-ltv68 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-q5prh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-fm25h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-ltv68 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-q5prh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-fm25h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-ltv68 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-q5prh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.13s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-fm25h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-fm25h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-ltv68 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-ltv68 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-q5prh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 kubectl -- exec busybox-769dd8b7dd-q5prh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

TestMultiControlPlane/serial/AddWorkerNode (41.51s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node add --alsologtostderr -v 5
E1227 20:07:16.303317   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:43.991905   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 node add --alsologtostderr -v 5: (40.750620196s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (41.51s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-949780 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (11.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp testdata/cp-test.txt ha-949780:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1843816124/001/cp-test_ha-949780.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780:/home/docker/cp-test.txt ha-949780-m02:/home/docker/cp-test_ha-949780_ha-949780-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test_ha-949780_ha-949780-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780:/home/docker/cp-test.txt ha-949780-m03:/home/docker/cp-test_ha-949780_ha-949780-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test_ha-949780_ha-949780-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780:/home/docker/cp-test.txt ha-949780-m04:/home/docker/cp-test_ha-949780_ha-949780-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test_ha-949780_ha-949780-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp testdata/cp-test.txt ha-949780-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1843816124/001/cp-test_ha-949780-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m02:/home/docker/cp-test.txt ha-949780:/home/docker/cp-test_ha-949780-m02_ha-949780.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test_ha-949780-m02_ha-949780.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m02:/home/docker/cp-test.txt ha-949780-m03:/home/docker/cp-test_ha-949780-m02_ha-949780-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test_ha-949780-m02_ha-949780-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m02:/home/docker/cp-test.txt ha-949780-m04:/home/docker/cp-test_ha-949780-m02_ha-949780-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test_ha-949780-m02_ha-949780-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp testdata/cp-test.txt ha-949780-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1843816124/001/cp-test_ha-949780-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m03:/home/docker/cp-test.txt ha-949780:/home/docker/cp-test_ha-949780-m03_ha-949780.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test_ha-949780-m03_ha-949780.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m03:/home/docker/cp-test.txt ha-949780-m02:/home/docker/cp-test_ha-949780-m03_ha-949780-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test_ha-949780-m03_ha-949780-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m03:/home/docker/cp-test.txt ha-949780-m04:/home/docker/cp-test_ha-949780-m03_ha-949780-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test_ha-949780-m03_ha-949780-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp testdata/cp-test.txt ha-949780-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1843816124/001/cp-test_ha-949780-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m04:/home/docker/cp-test.txt ha-949780:/home/docker/cp-test_ha-949780-m04_ha-949780.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780 "sudo cat /home/docker/cp-test_ha-949780-m04_ha-949780.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m04:/home/docker/cp-test.txt ha-949780-m02:/home/docker/cp-test_ha-949780-m04_ha-949780-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m02 "sudo cat /home/docker/cp-test_ha-949780-m04_ha-949780-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 cp ha-949780-m04:/home/docker/cp-test.txt ha-949780-m03:/home/docker/cp-test_ha-949780-m04_ha-949780-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 ssh -n ha-949780-m03 "sudo cat /home/docker/cp-test_ha-949780-m04_ha-949780-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.03s)

TestMultiControlPlane/serial/StopSecondaryNode (35.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node stop m02 --alsologtostderr -v 5
E1227 20:08:24.900855   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:24.906218   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:24.916608   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:24.937001   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:24.977394   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:25.057832   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:25.218385   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:25.539044   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:26.180022   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:27.460885   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:30.022698   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:35.143884   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 node stop m02 --alsologtostderr -v 5: (34.973060512s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5: exit status 7 (545.712913ms)
-- stdout --
	ha-949780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949780-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949780-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1227 20:08:43.558165   71516 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:08:43.558352   71516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:43.558368   71516 out.go:374] Setting ErrFile to fd 2...
	I1227 20:08:43.558375   71516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:43.558629   71516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:08:43.558849   71516 out.go:368] Setting JSON to false
	I1227 20:08:43.558877   71516 mustload.go:66] Loading cluster: ha-949780
	I1227 20:08:43.559015   71516 notify.go:221] Checking for updates...
	I1227 20:08:43.559273   71516 config.go:182] Loaded profile config "ha-949780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:08:43.559297   71516 status.go:174] checking status of ha-949780 ...
	I1227 20:08:43.561628   71516 status.go:371] ha-949780 host status = "Running" (err=<nil>)
	I1227 20:08:43.561648   71516 host.go:66] Checking if "ha-949780" exists ...
	I1227 20:08:43.564641   71516 main.go:144] libmachine: domain ha-949780 has defined MAC address 52:54:00:88:b8:c7 in network mk-ha-949780
	I1227 20:08:43.565093   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:88:b8:c7", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:04:21 +0000 UTC Type:0 Mac:52:54:00:88:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-949780 Clientid:01:52:54:00:88:b8:c7}
	I1227 20:08:43.565117   71516 main.go:144] libmachine: domain ha-949780 has defined IP address 192.168.39.3 and MAC address 52:54:00:88:b8:c7 in network mk-ha-949780
	I1227 20:08:43.565245   71516 host.go:66] Checking if "ha-949780" exists ...
	I1227 20:08:43.565459   71516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:43.567791   71516 main.go:144] libmachine: domain ha-949780 has defined MAC address 52:54:00:88:b8:c7 in network mk-ha-949780
	I1227 20:08:43.568233   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:88:b8:c7", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:04:21 +0000 UTC Type:0 Mac:52:54:00:88:b8:c7 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-949780 Clientid:01:52:54:00:88:b8:c7}
	I1227 20:08:43.568260   71516 main.go:144] libmachine: domain ha-949780 has defined IP address 192.168.39.3 and MAC address 52:54:00:88:b8:c7 in network mk-ha-949780
	I1227 20:08:43.568414   71516 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/ha-949780/id_rsa Username:docker}
	I1227 20:08:43.650029   71516 ssh_runner.go:195] Run: systemctl --version
	I1227 20:08:43.656375   71516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:43.674939   71516 kubeconfig.go:125] found "ha-949780" server: "https://192.168.39.254:8443"
	I1227 20:08:43.674989   71516 api_server.go:166] Checking apiserver status ...
	I1227 20:08:43.675038   71516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:08:43.697904   71516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	I1227 20:08:43.715612   71516 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1470/cgroup
	I1227 20:08:43.727845   71516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f16c2aa6674b1b436d93c7fb4aae7df.slice/crio-f89edcafc99d2cc55d06d30b855bc9b0db0d68fdbd8114a6845bf1b132c41c2e.scope/cgroup.freeze
	I1227 20:08:43.741923   71516 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1227 20:08:43.747332   71516 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1227 20:08:43.747365   71516 status.go:463] ha-949780 apiserver status = Running (err=<nil>)
	I1227 20:08:43.747375   71516 status.go:176] ha-949780 status: &{Name:ha-949780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:43.747397   71516 status.go:174] checking status of ha-949780-m02 ...
	I1227 20:08:43.749086   71516 status.go:371] ha-949780-m02 host status = "Stopped" (err=<nil>)
	I1227 20:08:43.749109   71516 status.go:384] host is not running, skipping remaining checks
	I1227 20:08:43.749115   71516 status.go:176] ha-949780-m02 status: &{Name:ha-949780-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:43.749133   71516 status.go:174] checking status of ha-949780-m03 ...
	I1227 20:08:43.750569   71516 status.go:371] ha-949780-m03 host status = "Running" (err=<nil>)
	I1227 20:08:43.750594   71516 host.go:66] Checking if "ha-949780-m03" exists ...
	I1227 20:08:43.753221   71516 main.go:144] libmachine: domain ha-949780-m03 has defined MAC address 52:54:00:12:d8:62 in network mk-ha-949780
	I1227 20:08:43.753626   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:d8:62", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:06:11 +0000 UTC Type:0 Mac:52:54:00:12:d8:62 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-949780-m03 Clientid:01:52:54:00:12:d8:62}
	I1227 20:08:43.753646   71516 main.go:144] libmachine: domain ha-949780-m03 has defined IP address 192.168.39.36 and MAC address 52:54:00:12:d8:62 in network mk-ha-949780
	I1227 20:08:43.753795   71516 host.go:66] Checking if "ha-949780-m03" exists ...
	I1227 20:08:43.754001   71516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:43.756466   71516 main.go:144] libmachine: domain ha-949780-m03 has defined MAC address 52:54:00:12:d8:62 in network mk-ha-949780
	I1227 20:08:43.757024   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:12:d8:62", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:06:11 +0000 UTC Type:0 Mac:52:54:00:12:d8:62 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:ha-949780-m03 Clientid:01:52:54:00:12:d8:62}
	I1227 20:08:43.757058   71516 main.go:144] libmachine: domain ha-949780-m03 has defined IP address 192.168.39.36 and MAC address 52:54:00:12:d8:62 in network mk-ha-949780
	I1227 20:08:43.757224   71516 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/ha-949780-m03/id_rsa Username:docker}
	I1227 20:08:43.840957   71516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:43.861572   71516 kubeconfig.go:125] found "ha-949780" server: "https://192.168.39.254:8443"
	I1227 20:08:43.861609   71516 api_server.go:166] Checking apiserver status ...
	I1227 20:08:43.861679   71516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:08:43.883004   71516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1730/cgroup
	I1227 20:08:43.895296   71516 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1730/cgroup
	I1227 20:08:43.911064   71516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad16e18208fac268e5b557020eb9e08.slice/crio-e8d00bbb9022ddc2cfe549753d79ae82a387dc2c16a07977d71e96d99826c66e.scope/cgroup.freeze
	I1227 20:08:43.923548   71516 api_server.go:299] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1227 20:08:43.928461   71516 api_server.go:325] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1227 20:08:43.928490   71516 status.go:463] ha-949780-m03 apiserver status = Running (err=<nil>)
	I1227 20:08:43.928499   71516 status.go:176] ha-949780-m03 status: &{Name:ha-949780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:43.928523   71516 status.go:174] checking status of ha-949780-m04 ...
	I1227 20:08:43.930252   71516 status.go:371] ha-949780-m04 host status = "Running" (err=<nil>)
	I1227 20:08:43.930272   71516 host.go:66] Checking if "ha-949780-m04" exists ...
	I1227 20:08:43.933316   71516 main.go:144] libmachine: domain ha-949780-m04 has defined MAC address 52:54:00:5b:36:b9 in network mk-ha-949780
	I1227 20:08:43.933860   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:36:b9", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:07:31 +0000 UTC Type:0 Mac:52:54:00:5b:36:b9 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-949780-m04 Clientid:01:52:54:00:5b:36:b9}
	I1227 20:08:43.933903   71516 main.go:144] libmachine: domain ha-949780-m04 has defined IP address 192.168.39.114 and MAC address 52:54:00:5b:36:b9 in network mk-ha-949780
	I1227 20:08:43.934114   71516 host.go:66] Checking if "ha-949780-m04" exists ...
	I1227 20:08:43.934404   71516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:43.936931   71516 main.go:144] libmachine: domain ha-949780-m04 has defined MAC address 52:54:00:5b:36:b9 in network mk-ha-949780
	I1227 20:08:43.937364   71516 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:36:b9", ip: ""} in network mk-ha-949780: {Iface:virbr1 ExpiryTime:2025-12-27 21:07:31 +0000 UTC Type:0 Mac:52:54:00:5b:36:b9 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-949780-m04 Clientid:01:52:54:00:5b:36:b9}
	I1227 20:08:43.937391   71516 main.go:144] libmachine: domain ha-949780-m04 has defined IP address 192.168.39.114 and MAC address 52:54:00:5b:36:b9 in network mk-ha-949780
	I1227 20:08:43.937553   71516 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/ha-949780-m04/id_rsa Username:docker}
	I1227 20:08:44.020038   71516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:44.041164   71516 status.go:176] ha-949780-m04 status: &{Name:ha-949780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (35.52s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.55s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node start m02 --alsologtostderr -v 5
E1227 20:08:45.384064   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:09:05.865257   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 node start m02 --alsologtostderr -v 5: (31.473313024s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5: (1.000425107s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (221.76s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 stop --alsologtostderr -v 5
E1227 20:09:46.825466   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:11:08.747539   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 stop --alsologtostderr -v 5: (1m51.762437349s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 start --wait true --alsologtostderr -v 5
E1227 20:12:16.303910   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 start --wait true --alsologtostderr -v 5: (1m49.835626286s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (221.76s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 node delete m03 --alsologtostderr -v 5: (8.707834102s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (104.39s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 stop --alsologtostderr -v 5
E1227 20:13:24.900573   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:13:52.588678   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 stop --alsologtostderr -v 5: (1m44.320469933s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5: exit status 7 (67.369759ms)
-- stdout --
	ha-949780
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949780-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949780-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 20:14:54.240114   73745 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:14:54.240399   73745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:14:54.240410   73745 out.go:374] Setting ErrFile to fd 2...
	I1227 20:14:54.240414   73745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:14:54.240632   73745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:14:54.240811   73745 out.go:368] Setting JSON to false
	I1227 20:14:54.240839   73745 mustload.go:66] Loading cluster: ha-949780
	I1227 20:14:54.241013   73745 notify.go:221] Checking for updates...
	I1227 20:14:54.241241   73745 config.go:182] Loaded profile config "ha-949780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:14:54.241261   73745 status.go:174] checking status of ha-949780 ...
	I1227 20:14:54.243440   73745 status.go:371] ha-949780 host status = "Stopped" (err=<nil>)
	I1227 20:14:54.243457   73745 status.go:384] host is not running, skipping remaining checks
	I1227 20:14:54.243463   73745 status.go:176] ha-949780 status: &{Name:ha-949780 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:14:54.243479   73745 status.go:174] checking status of ha-949780-m02 ...
	I1227 20:14:54.244804   73745 status.go:371] ha-949780-m02 host status = "Stopped" (err=<nil>)
	I1227 20:14:54.244822   73745 status.go:384] host is not running, skipping remaining checks
	I1227 20:14:54.244826   73745 status.go:176] ha-949780-m02 status: &{Name:ha-949780-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:14:54.244838   73745 status.go:174] checking status of ha-949780-m04 ...
	I1227 20:14:54.246622   73745 status.go:371] ha-949780-m04 host status = "Stopped" (err=<nil>)
	I1227 20:14:54.246638   73745 status.go:384] host is not running, skipping remaining checks
	I1227 20:14:54.246644   73745 status.go:176] ha-949780-m04 status: &{Name:ha-949780-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (104.39s)

TestMultiControlPlane/serial/RestartCluster (92.17s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m31.476713913s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (92.17s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (63.58s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 node add --control-plane --alsologtostderr -v 5
E1227 20:17:16.304105   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-949780 node add --control-plane --alsologtostderr -v 5: (1m2.833982091s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-949780 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (83.91s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-583591 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1227 20:18:24.907474   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:18:39.354404   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-583591 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.906908689s)
--- PASS: TestJSONOutput/start/Command (83.91s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-583591 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-583591 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (37.47s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-583591 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-583591 --output=json --user=testUser: (37.473572989s)
--- PASS: TestJSONOutput/stop/Command (37.47s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-636876 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-636876 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.738014ms)
-- stdout --
	{"specversion":"1.0","id":"a3305d88-fb31-4d40-b936-dee818f44c8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-636876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cba00ed-59a3-4ade-a147-2d7af9935c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"fea68efb-a0b6-46a4-8eef-e3d9114af75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81bf7cc0-a0a9-4e9a-996b-dbfccb1c1c42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig"}}
	{"specversion":"1.0","id":"5a5b9b2b-adc4-4a96-9111-7d3ae75fad68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube"}}
	{"specversion":"1.0","id":"8b48605b-9bba-450c-975b-55b1f3dfb638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8d9fbe35-5aad-4084-ab5e-2ad3ace19cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb0d5e62-ed90-4d53-abfc-bc8df024628e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-636876" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-636876
--- PASS: TestErrorJSONOutput (0.24s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (73.43s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-494163 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-494163 --driver=kvm2  --container-runtime=crio: (34.540823281s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-497300 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-497300 --driver=kvm2  --container-runtime=crio: (36.189861444s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-494163
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-497300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-497300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-497300
helpers_test.go:176: Cleaning up "first-494163" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-494163
--- PASS: TestMinikubeProfile (73.43s)

TestMountStart/serial/StartWithMountFirst (23.37s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-988815 --memory=3072 --mount-string /tmp/TestMountStartserial4257683003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-988815 --memory=3072 --mount-string /tmp/TestMountStartserial4257683003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.370380553s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.37s)

TestMountStart/serial/VerifyMountFirst (0.32s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-988815 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-988815 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (20.58s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005886 --memory=3072 --mount-string /tmp/TestMountStartserial4257683003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005886 --memory=3072 --mount-string /tmp/TestMountStartserial4257683003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.584030419s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.58s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-988815 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-005886
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-005886: (1.268219687s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (17.9s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005886
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005886: (16.898629618s)
--- PASS: TestMountStart/serial/RestartStopped (17.90s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005886 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (93.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-076320 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1227 20:22:16.303238   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:23:24.900108   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-076320 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m33.568126029s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.94s)

TestMultiNode/serial/DeployApp2Nodes (4.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-076320 -- rollout status deployment/busybox: (2.650496554s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-cj622 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-j7kp9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-cj622 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-j7kp9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-cj622 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-j7kp9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-cj622 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-cj622 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-j7kp9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-076320 -- exec busybox-769dd8b7dd-j7kp9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
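The host IP used for the ping is scraped from nslookup output with `awk 'NR==5' | cut -d' ' -f3`. A local sketch of the same pipeline against sample text in the busybox-nslookup shape (the sample output is an assumption; only the pipeline itself comes from the test):

```shell
# Sample output shaped like busybox nslookup; line 5 carries the answer record.
out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
# Same pipeline as the test: take line 5, then the third space-separated field.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d" " -f3)
echo "$ip"
```

This prints `192.168.39.1`, the gateway address the test then pings from each pod. The fixed `NR==5` makes the extraction brittle against resolver output changes, which is why both pods are checked the same way.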

TestMultiNode/serial/AddNode (41.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-076320 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-076320 -v=5 --alsologtostderr: (41.142422141s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.62s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-076320 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.49s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.49s)

TestMultiNode/serial/CopyFile (6.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp testdata/cp-test.txt multinode-076320:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324565519/001/cp-test_multinode-076320.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320:/home/docker/cp-test.txt multinode-076320-m02:/home/docker/cp-test_multinode-076320_multinode-076320-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test_multinode-076320_multinode-076320-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320:/home/docker/cp-test.txt multinode-076320-m03:/home/docker/cp-test_multinode-076320_multinode-076320-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test_multinode-076320_multinode-076320-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp testdata/cp-test.txt multinode-076320-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324565519/001/cp-test_multinode-076320-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m02:/home/docker/cp-test.txt multinode-076320:/home/docker/cp-test_multinode-076320-m02_multinode-076320.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test_multinode-076320-m02_multinode-076320.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m02:/home/docker/cp-test.txt multinode-076320-m03:/home/docker/cp-test_multinode-076320-m02_multinode-076320-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test_multinode-076320-m02_multinode-076320-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp testdata/cp-test.txt multinode-076320-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324565519/001/cp-test_multinode-076320-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m03:/home/docker/cp-test.txt multinode-076320:/home/docker/cp-test_multinode-076320-m03_multinode-076320.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320 "sudo cat /home/docker/cp-test_multinode-076320-m03_multinode-076320.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 cp multinode-076320-m03:/home/docker/cp-test.txt multinode-076320-m02:/home/docker/cp-test_multinode-076320-m03_multinode-076320-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 ssh -n multinode-076320-m02 "sudo cat /home/docker/cp-test_multinode-076320-m03_multinode-076320-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.10s)
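Every CopyFile step above follows the same round trip: copy a file to a node, `cat` it back over ssh, and compare it with the original. A local stand-in for that pattern, with plain `cp`/`cmp` in place of `minikube cp` and `ssh sudo cat` (the payload string is hypothetical):

```shell
# Stand-in round trip: plain cp/cmp instead of `minikube cp` + `ssh -n ... sudo cat`.
src=$(mktemp) && dst=$(mktemp)
echo "cp-test payload" > "$src"
cp "$src" "$dst"                 # ~ minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
cmp -s "$src" "$dst" && echo "contents match"
rm -f "$src" "$dst"
```

The test repeats this for every source/destination pair, including node-to-node copies, which is why the log shows each `cp` followed by one `ssh ... sudo cat` per endpoint.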

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-076320 node stop m03: (1.533280195s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-076320 status: exit status 7 (353.179488ms)

-- stdout --
	multinode-076320
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-076320-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-076320-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr: exit status 7 (364.946791ms)

-- stdout --
	multinode-076320
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-076320-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-076320-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 20:24:26.684681   79621 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:24:26.684970   79621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:26.684981   79621 out.go:374] Setting ErrFile to fd 2...
	I1227 20:24:26.684985   79621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:26.685194   79621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:24:26.685366   79621 out.go:368] Setting JSON to false
	I1227 20:24:26.685389   79621 mustload.go:66] Loading cluster: multinode-076320
	I1227 20:24:26.685510   79621 notify.go:221] Checking for updates...
	I1227 20:24:26.685811   79621 config.go:182] Loaded profile config "multinode-076320": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:24:26.685832   79621 status.go:174] checking status of multinode-076320 ...
	I1227 20:24:26.688075   79621 status.go:371] multinode-076320 host status = "Running" (err=<nil>)
	I1227 20:24:26.688099   79621 host.go:66] Checking if "multinode-076320" exists ...
	I1227 20:24:26.691237   79621 main.go:144] libmachine: domain multinode-076320 has defined MAC address 52:54:00:0b:68:6c in network mk-multinode-076320
	I1227 20:24:26.691809   79621 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:68:6c", ip: ""} in network mk-multinode-076320: {Iface:virbr1 ExpiryTime:2025-12-27 21:22:12 +0000 UTC Type:0 Mac:52:54:00:0b:68:6c Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-076320 Clientid:01:52:54:00:0b:68:6c}
	I1227 20:24:26.691839   79621 main.go:144] libmachine: domain multinode-076320 has defined IP address 192.168.39.207 and MAC address 52:54:00:0b:68:6c in network mk-multinode-076320
	I1227 20:24:26.692009   79621 host.go:66] Checking if "multinode-076320" exists ...
	I1227 20:24:26.692300   79621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:24:26.695012   79621 main.go:144] libmachine: domain multinode-076320 has defined MAC address 52:54:00:0b:68:6c in network mk-multinode-076320
	I1227 20:24:26.695433   79621 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:68:6c", ip: ""} in network mk-multinode-076320: {Iface:virbr1 ExpiryTime:2025-12-27 21:22:12 +0000 UTC Type:0 Mac:52:54:00:0b:68:6c Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-076320 Clientid:01:52:54:00:0b:68:6c}
	I1227 20:24:26.695456   79621 main.go:144] libmachine: domain multinode-076320 has defined IP address 192.168.39.207 and MAC address 52:54:00:0b:68:6c in network mk-multinode-076320
	I1227 20:24:26.695680   79621 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/multinode-076320/id_rsa Username:docker}
	I1227 20:24:26.779533   79621 ssh_runner.go:195] Run: systemctl --version
	I1227 20:24:26.786464   79621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:24:26.816739   79621 kubeconfig.go:125] found "multinode-076320" server: "https://192.168.39.207:8443"
	I1227 20:24:26.816780   79621 api_server.go:166] Checking apiserver status ...
	I1227 20:24:26.816828   79621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:24:26.836287   79621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	I1227 20:24:26.848473   79621 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1399/cgroup
	I1227 20:24:26.860244   79621 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d71346556bfaae63c0d9498c2118102.slice/crio-3b999b085aabe2d8a6e9cd49601e3f5ad3ea23d02649030b8498ae3877062d60.scope/cgroup.freeze
	I1227 20:24:26.872335   79621 api_server.go:299] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I1227 20:24:26.877831   79621 api_server.go:325] https://192.168.39.207:8443/healthz returned 200:
	ok
	I1227 20:24:26.877857   79621 status.go:463] multinode-076320 apiserver status = Running (err=<nil>)
	I1227 20:24:26.877866   79621 status.go:176] multinode-076320 status: &{Name:multinode-076320 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:24:26.877884   79621 status.go:174] checking status of multinode-076320-m02 ...
	I1227 20:24:26.879566   79621 status.go:371] multinode-076320-m02 host status = "Running" (err=<nil>)
	I1227 20:24:26.879585   79621 host.go:66] Checking if "multinode-076320-m02" exists ...
	I1227 20:24:26.882026   79621 main.go:144] libmachine: domain multinode-076320-m02 has defined MAC address 52:54:00:90:be:e5 in network mk-multinode-076320
	I1227 20:24:26.882413   79621 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:be:e5", ip: ""} in network mk-multinode-076320: {Iface:virbr1 ExpiryTime:2025-12-27 21:23:04 +0000 UTC Type:0 Mac:52:54:00:90:be:e5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-076320-m02 Clientid:01:52:54:00:90:be:e5}
	I1227 20:24:26.882436   79621 main.go:144] libmachine: domain multinode-076320-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:90:be:e5 in network mk-multinode-076320
	I1227 20:24:26.882594   79621 host.go:66] Checking if "multinode-076320-m02" exists ...
	I1227 20:24:26.882838   79621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:24:26.884965   79621 main.go:144] libmachine: domain multinode-076320-m02 has defined MAC address 52:54:00:90:be:e5 in network mk-multinode-076320
	I1227 20:24:26.885405   79621 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:be:e5", ip: ""} in network mk-multinode-076320: {Iface:virbr1 ExpiryTime:2025-12-27 21:23:04 +0000 UTC Type:0 Mac:52:54:00:90:be:e5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-076320-m02 Clientid:01:52:54:00:90:be:e5}
	I1227 20:24:26.885426   79621 main.go:144] libmachine: domain multinode-076320-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:90:be:e5 in network mk-multinode-076320
	I1227 20:24:26.885581   79621 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22332-59055/.minikube/machines/multinode-076320-m02/id_rsa Username:docker}
	I1227 20:24:26.970148   79621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:24:26.988612   79621 status.go:176] multinode-076320-m02 status: &{Name:multinode-076320-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:24:26.988652   79621 status.go:174] checking status of multinode-076320-m03 ...
	I1227 20:24:26.990302   79621 status.go:371] multinode-076320-m03 host status = "Stopped" (err=<nil>)
	I1227 20:24:26.990334   79621 status.go:384] host is not running, skipping remaining checks
	I1227 20:24:26.990341   79621 status.go:176] multinode-076320-m03 status: &{Name:multinode-076320-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
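Note that `minikube status` deliberately exits non-zero (status 7 in this run) when any host is stopped, so the test treats that exit code as expected rather than as a failure. A sketch of how a caller branches on it, with a stub standing in for the real binary (the 7-means-stopped mapping is read off this run, not the full minikube exit-code table):

```shell
# Stub for `out/minikube-linux-amd64 -p multinode-076320 status`,
# which exited with status 7 above because node m03 was stopped.
status_cmd() { return 7; }
status_cmd
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "all hosts running"
elif [ "$rc" -eq 7 ]; then
  echo "at least one host stopped"
else
  echo "status failed: $rc"
fi
```

This prints `at least one host stopped`, mirroring why the `Non-zero exit` lines above still end in PASS.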

TestMultiNode/serial/StartAfterStop (38.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 node start m03 -v=5 --alsologtostderr
E1227 20:24:47.949782   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-076320 node start m03 -v=5 --alsologtostderr: (37.581164603s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.13s)

TestMultiNode/serial/RestartKeepsNodes (195.89s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-076320
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-076320
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-076320: (1m10.01963522s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-076320 --wait=true -v=5 --alsologtostderr
E1227 20:27:16.303342   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-076320 --wait=true -v=5 --alsologtostderr: (2m5.745751613s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-076320
--- PASS: TestMultiNode/serial/RestartKeepsNodes (195.89s)

TestMultiNode/serial/DeleteNode (2.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-076320 node delete m03: (2.256666346s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.73s)
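The final check renders each node's `Ready` condition through a go-template. The same reduction can be sketched locally over canned `kubectl get nodes -o json`-shaped data, using grep/cut in place of the template engine (the JSON below is fabricated for illustration):

```shell
# Two nodes, both Ready, in a minimal kubectl-get-nodes JSON shape.
nodes='{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}'
# Pull the status of every Ready condition, one per line.
printf '%s\n' "$nodes" | grep -o '"type":"Ready","status":"[^"]*"' | cut -d'"' -f8
```

This prints one `True` per remaining node, which is what the test asserts after the m03 delete leaves two Ready nodes.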

TestMultiNode/serial/StopMultiNode (68.52s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 stop
E1227 20:28:24.900594   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-076320 stop: (1m8.387972122s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-076320 status: exit status 7 (64.445142ms)

-- stdout --
	multinode-076320
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-076320-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr: exit status 7 (63.406804ms)

-- stdout --
	multinode-076320
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-076320-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 20:29:32.253565   81319 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:32.253690   81319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:32.253699   81319 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:32.253704   81319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:32.253912   81319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:29:32.254094   81319 out.go:368] Setting JSON to false
	I1227 20:29:32.254121   81319 mustload.go:66] Loading cluster: multinode-076320
	I1227 20:29:32.254264   81319 notify.go:221] Checking for updates...
	I1227 20:29:32.254464   81319 config.go:182] Loaded profile config "multinode-076320": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:32.254483   81319 status.go:174] checking status of multinode-076320 ...
	I1227 20:29:32.256543   81319 status.go:371] multinode-076320 host status = "Stopped" (err=<nil>)
	I1227 20:29:32.256561   81319 status.go:384] host is not running, skipping remaining checks
	I1227 20:29:32.256567   81319 status.go:176] multinode-076320 status: &{Name:multinode-076320 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:29:32.256584   81319 status.go:174] checking status of multinode-076320-m02 ...
	I1227 20:29:32.257948   81319 status.go:371] multinode-076320-m02 host status = "Stopped" (err=<nil>)
	I1227 20:29:32.257966   81319 status.go:384] host is not running, skipping remaining checks
	I1227 20:29:32.257972   81319 status.go:176] multinode-076320-m02 status: &{Name:multinode-076320-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (68.52s)

TestMultiNode/serial/RestartMultiNode (86.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-076320 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-076320 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m26.174944765s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-076320 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.67s)

TestMultiNode/serial/ValidateNameConflict (38.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-076320
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-076320-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-076320-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.171582ms)

-- stdout --
	* [multinode-076320-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-076320-m02' is duplicated with machine name 'multinode-076320-m02' in profile 'multinode-076320'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-076320-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-076320-m03 --driver=kvm2  --container-runtime=crio: (37.496790304s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-076320
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-076320: exit status 80 (242.943975ms)

-- stdout --
	* Adding node m03 to cluster multinode-076320 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-076320-m03 already exists in multinode-076320-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-076320-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.72s)

TestScheduledStopUnix (106.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-876142 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-876142 --memory=3072 --driver=kvm2  --container-runtime=crio: (32.696456058s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-876142 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 20:32:12.002764   82633 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:12.002897   82633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:12.002908   82633 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:12.002913   82633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:12.003146   82633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:32:12.003400   82633 out.go:368] Setting JSON to false
	I1227 20:32:12.003481   82633 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:12.003814   82633 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:12.003883   82633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/scheduled-stop-876142/config.json ...
	I1227 20:32:12.004059   82633 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:12.004162   82633 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-876142 -n scheduled-stop-876142
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-876142 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 20:32:12.318231   82677 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:12.318526   82677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:12.318536   82677 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:12.318542   82677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:12.318785   82677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:32:12.319066   82677 out.go:368] Setting JSON to false
	I1227 20:32:12.319300   82677 daemonize_unix.go:73] killing process 82667 as it is an old scheduled stop
	I1227 20:32:12.319424   82677 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:12.319878   82677 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:12.319987   82677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/scheduled-stop-876142/config.json ...
	I1227 20:32:12.320202   82677 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:12.320357   82677 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 20:32:12.325258   62937 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/scheduled-stop-876142/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-876142 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1227 20:32:16.302938   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-876142 -n scheduled-stop-876142
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-876142
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-876142 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 20:32:38.053672   82826 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:38.053966   82826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:38.053977   82826 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:38.053982   82826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:38.054213   82826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:32:38.054480   82826 out.go:368] Setting JSON to false
	I1227 20:32:38.054575   82826 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:38.054912   82826 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:38.054990   82826 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/scheduled-stop-876142/config.json ...
	I1227 20:32:38.055190   82826 mustload.go:66] Loading cluster: scheduled-stop-876142
	I1227 20:32:38.055306   82826 config.go:182] Loaded profile config "scheduled-stop-876142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-876142
scheduled_stop_test.go:218: (dbg) Done: out/minikube-linux-amd64 status -p scheduled-stop-876142: (1.802442649s)
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-876142 -n scheduled-stop-876142
E1227 20:33:24.900585   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-876142 -n scheduled-stop-876142: exit status 7 (62.476253ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-876142" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-876142
--- PASS: TestScheduledStopUnix (106.12s)
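The log above shows the shape of minikube's scheduled-stop flow: a detached helper waits out the schedule, and re-scheduling first kills the previously recorded helper ("killing process 82667 as it is an old scheduled stop"). The following is a minimal shell sketch of that pattern, not minikube's actual code; the pid-file path and messages are hypothetical.

```shell
#!/bin/sh
# Sketch of a scheduled stop: a backgrounded helper sleeps for the schedule,
# then performs the "stop"; scheduling again first cancels the old helper.
PIDFILE=/tmp/sched-stop.pid   # hypothetical location for the helper's pid

schedule_stop() {
  delay=$1
  # Cancel any previously scheduled stop, as the log above does.
  [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2>/dev/null
  ( sleep "$delay" && echo "stopping cluster" ) &
  echo $! > "$PIDFILE"
}

schedule_stop 1
wait                # block until the helper fires
rm -f "$PIDFILE"
```

Cancelling (`--cancel-scheduled` above) corresponds to killing the recorded pid and removing the pid file without starting a new helper.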

TestRunningBinaryUpgrade (457.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1776238690 start -p running-upgrade-013380 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1776238690 start -p running-upgrade-013380 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m35.363886481s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-013380 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-013380 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m0.495594241s)
helpers_test.go:176: Cleaning up "running-upgrade-013380" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-013380
--- PASS: TestRunningBinaryUpgrade (457.17s)

TestKubernetesUpgrade (150.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.564931858s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-627460 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-627460 --alsologtostderr: (3.0718323s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-627460 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-627460 status --format={{.Host}}: exit status 7 (65.845882ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1227 20:38:24.899953   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.39757486s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-627460 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (176.243592ms)

-- stdout --
	* [kubernetes-upgrade-627460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-627460
	    minikube start -p kubernetes-upgrade-627460 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6274602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-627460 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-627460 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.828128413s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-627460" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-627460
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-627460: (1.108988983s)
--- PASS: TestKubernetesUpgrade (150.33s)

TestPreload/Start-NoPreload-PullImage (144.79s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-033251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-033251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m48.56188697s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-033251 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-033251
E1227 20:35:19.356630   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-033251: (35.544522285s)
--- PASS: TestPreload/Start-NoPreload-PullImage (144.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (94.34193ms)

-- stdout --
	* [NoKubernetes-998433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (74.59s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-998433 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-998433 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.315677849s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-998433 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.59s)

TestNoKubernetes/serial/StartWithStopK8s (116.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m55.706440058s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-998433 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-998433 status -o json: exit status 2 (203.32096ms)

-- stdout --
	{"Name":"NoKubernetes-998433","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-998433
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (116.82s)
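The exit status 2 above signals a partially running profile: the host is up while Kubernetes is stopped. As a sketch, the JSON from `status -o json` can be checked with plain POSIX tools (the literal status string below is copied from the output above; `jq` would be simpler where available):

```shell
#!/bin/sh
# Pull individual fields out of the status JSON shown above using sed only.
status='{"Name":"NoKubernetes-998433","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
host=$(printf '%s' "$status" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
kubelet=$(printf '%s' "$status" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "Host=$host Kubelet=$kubelet"   # prints: Host=Running Kubelet=Stopped
```

This is the "host running, no Kubernetes" state the `--no-kubernetes` start is expected to produce.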

TestPause/serial/Start (167.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-274241 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-274241 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m47.99173613s)
--- PASS: TestPause/serial/Start (167.99s)

TestPreload/Restart-With-Preload-Check-User-Image (96.73s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-033251 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-033251 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.513146761s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-033251 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (96.73s)

TestNoKubernetes/serial/Start (43.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1227 20:37:16.303561   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-998433 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.687195325s)
--- PASS: TestNoKubernetes/serial/Start (43.69s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22332-59055/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-998433 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-998433 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.709443ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
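The check above relies on `systemctl is-active` exiting 0 only when the unit is active; any non-zero exit means kubelet is not running. A hedged sketch of that logic, where `check_kubelet` is a hypothetical stand-in reproducing the exit status 4 seen in the log (no systemd host is assumed here):

```shell
#!/bin/sh
# Stand-in for: ssh node "sudo systemctl is-active --quiet service kubelet"
check_kubelet() {
  return 4   # exit status observed above on a no-kubernetes node
}

if check_kubelet; then
  echo "kubelet active"
else
  # $? still holds the condition's exit status at this point
  echo "kubelet not running (exit $?)"
fi
```

The test treats any non-zero exit as success, since the profile was started with `--no-kubernetes`.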

TestNoKubernetes/serial/ProfileList (1.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.42s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-998433
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-998433: (1.354093231s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (17.45s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-998433 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-998433 --driver=kvm2  --container-runtime=crio: (17.445652473s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.45s)

TestPause/serial/SecondStartNoReconfiguration (72.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-274241 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-274241 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.431688996s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (72.46s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-998433 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-998433 "sudo systemctl is-active --quiet service kubelet": exit status 1 (173.737323ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestStoppedBinaryUpgrade/Setup (0.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStoppedBinaryUpgrade/Upgrade (102.07s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.101077875 start -p stopped-upgrade-527066 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.101077875 start -p stopped-upgrade-527066 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (57.393603115s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.101077875 -p stopped-upgrade-527066 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.101077875 -p stopped-upgrade-527066 stop: (1.975152888s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-527066 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-527066 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.698686015s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.07s)

TestPause/serial/Pause (0.9s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-274241 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.28s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-274241 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-274241 --output=json --layout=cluster: exit status 2 (281.264354ms)
-- stdout --
	{"Name":"pause-274241","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-274241","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
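As a side note, the paused-cluster payload above can be consumed programmatically. A minimal sketch that parses the JSON captured in this run; the HTTP-style status codes and names (200 "OK", 405 "Stopped", 418 "Paused") are taken directly from the output itself:

```python
import json

# Status payload as captured by
# `minikube status --output=json --layout=cluster` in the run above.
raw = """{"Name":"pause-274241","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-274241","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}"""

status = json.loads(raw)

# Top-level cluster state: 418 is the "Paused" status code in this layout.
assert status["StatusCode"] == 418
assert status["StatusName"] == "Paused"

# Per-node component state: apiserver paused, kubelet stopped.
node = status["Nodes"][0]
assert node["Components"]["apiserver"]["StatusName"] == "Paused"
assert node["Components"]["kubelet"]["StatusCode"] == 405

print("cluster paused as expected")
```

This is also why the command exits with status 2 here: the test treats the non-zero exit as expected while verifying the JSON body.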

TestPause/serial/Unpause (0.85s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-274241 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

TestPause/serial/PauseAgain (1.01s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-274241 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-274241 --alsologtostderr -v=5: (1.01307774s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

TestPause/serial/DeletePaused (0.92s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-274241 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.92s)

TestPause/serial/VerifyDeletedResources (4.34s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.336205632s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.34s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-527066
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-527066: (2.186399133s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.19s)

TestISOImage/Setup (20.16s)
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-664983 --no-kubernetes --memory=2500mb --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-664983 --no-kubernetes --memory=2500mb --driver=kvm2  --container-runtime=crio: (20.1627218s)
--- PASS: TestISOImage/Setup (20.16s)

TestISOImage/Binaries/crictl (0.2s)
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

TestISOImage/Binaries/curl (0.19s)
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

TestISOImage/Binaries/docker (0.21s)
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

TestISOImage/Binaries/git (0.18s)
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

TestISOImage/Binaries/iptables (0.18s)
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

TestISOImage/Binaries/podman (0.17s)
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

TestISOImage/Binaries/rsync (0.19s)
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

TestISOImage/Binaries/socat (0.17s)
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

TestISOImage/Binaries/wget (0.19s)
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.19s)

TestISOImage/Binaries/VBoxControl (0.17s)
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.17s)
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

TestNetworkPlugins/group/false (4.18s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-589895 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-589895 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (143.820938ms)
-- stdout --
	* [false-589895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1227 20:39:52.168216   88475 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:39:52.168378   88475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:39:52.168389   88475 out.go:374] Setting ErrFile to fd 2...
	I1227 20:39:52.168396   88475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:39:52.168747   88475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-59055/.minikube/bin
	I1227 20:39:52.169414   88475 out.go:368] Setting JSON to false
	I1227 20:39:52.170665   88475 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8542,"bootTime":1766859450,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:39:52.170772   88475 start.go:143] virtualization: kvm guest
	I1227 20:39:52.172761   88475 out.go:179] * [false-589895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:39:52.174123   88475 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:39:52.174119   88475 notify.go:221] Checking for updates...
	I1227 20:39:52.177152   88475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:39:52.178465   88475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-59055/kubeconfig
	I1227 20:39:52.182980   88475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-59055/.minikube
	I1227 20:39:52.184472   88475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:39:52.185911   88475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:39:52.187989   88475 config.go:182] Loaded profile config "force-systemd-env-774096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:39:52.188122   88475 config.go:182] Loaded profile config "guest-664983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1227 20:39:52.188246   88475 config.go:182] Loaded profile config "kubernetes-upgrade-627460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:39:52.188382   88475 config.go:182] Loaded profile config "running-upgrade-013380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:39:52.188518   88475 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:39:52.234292   88475 out.go:179] * Using the kvm2 driver based on user configuration
	I1227 20:39:52.235499   88475 start.go:309] selected driver: kvm2
	I1227 20:39:52.235523   88475 start.go:928] validating driver "kvm2" against <nil>
	I1227 20:39:52.235536   88475 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:39:52.237672   88475 out.go:203] 
	W1227 20:39:52.238951   88475 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 20:39:52.240196   88475 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-589895 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-589895

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-589895

>>> host: /etc/nsswitch.conf:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/hosts:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/resolv.conf:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-589895

>>> host: crictl pods:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: crictl containers:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> k8s: describe netcat deployment:
error: context "false-589895" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-589895" does not exist

>>> k8s: netcat logs:
error: context "false-589895" does not exist

>>> k8s: describe coredns deployment:
error: context "false-589895" does not exist

>>> k8s: describe coredns pods:
error: context "false-589895" does not exist

>>> k8s: coredns logs:
error: context "false-589895" does not exist

>>> k8s: describe api server pod(s):
error: context "false-589895" does not exist

>>> k8s: api server logs:
error: context "false-589895" does not exist

>>> host: /etc/cni:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: ip a s:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: ip r s:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: iptables-save:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: iptables table nat:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> k8s: describe kube-proxy daemon set:
error: context "false-589895" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-589895" does not exist

>>> k8s: kube-proxy logs:
error: context "false-589895" does not exist

>>> host: kubelet daemon status:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: kubelet daemon config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> k8s: kubelet logs:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:39:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.72:8443
  name: kubernetes-upgrade-627460
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:36:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.147:8443
  name: running-upgrade-013380
contexts:
- context:
    cluster: kubernetes-upgrade-627460
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:39:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-627460
  name: kubernetes-upgrade-627460
- context:
    cluster: running-upgrade-013380
    user: running-upgrade-013380
  name: running-upgrade-013380
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-627460
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/kubernetes-upgrade-627460/client.crt
    client-key: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/kubernetes-upgrade-627460/client.key
- name: running-upgrade-013380
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.crt
    client-key: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-589895

>>> host: docker daemon status:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: docker daemon config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/docker/daemon.json:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: docker system info:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: cri-docker daemon status:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: cri-docker daemon config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: cri-dockerd version:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: containerd daemon status:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: containerd daemon config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/containerd/config.toml:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: containerd config dump:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: crio daemon status:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: crio daemon config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: /etc/crio:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

>>> host: crio config:
* Profile "false-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589895"

----------------------- debugLogs end: false-589895 [took: 3.868316872s] --------------------------------
helpers_test.go:176: Cleaning up "false-589895" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-589895
--- PASS: TestNetworkPlugins/group/false (4.18s)

TestPreload/PreloadSrc/gcs (3.76s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-735391 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-735391 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio: (3.616713218s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-735391" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-735391
--- PASS: TestPreload/PreloadSrc/gcs (3.76s)

TestPreload/PreloadSrc/github (4.92s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-881923 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-881923 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio: (4.793861549s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-881923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-881923
--- PASS: TestPreload/PreloadSrc/github (4.92s)

TestPreload/PreloadSrc/gcs-cached (0.24s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-649946 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-649946" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-649946
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (98.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-329307 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-329307 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m38.389501387s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (98.39s)

TestStartStop/group/no-preload/serial/FirstStart (117.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-225351 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-225351 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (1m57.397053101s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.40s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-066649 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-066649 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (1m47.472991945s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-329307 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [becdf77b-0fbd-4b84-9067-64ee746fc3dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 20:42:16.303632   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [becdf77b-0fbd-4b84-9067-64ee746fc3dd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004240429s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-329307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-329307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-329307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052523355s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-329307 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/Stop (33.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-329307 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-329307 --alsologtostderr -v=3: (33.658817411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (33.66s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-225351 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9fc93e83-4ccc-4909-bfc8-ac5b078a34a5] Pending
helpers_test.go:353: "busybox" [9fc93e83-4ccc-4909-bfc8-ac5b078a34a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9fc93e83-4ccc-4909-bfc8-ac5b078a34a5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.006154593s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-225351 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-066649 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [53430640-f0fb-4d04-be20-300f2f54eade] Pending
helpers_test.go:353: "busybox" [53430640-f0fb-4d04-be20-300f2f54eade] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [53430640-f0fb-4d04-be20-300f2f54eade] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004303069s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-066649 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-225351 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-225351 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (37.46s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-225351 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-225351 --alsologtostderr -v=3: (37.460583844s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (37.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329307 -n old-k8s-version-329307
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329307 -n old-k8s-version-329307: exit status 7 (65.845753ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-329307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (49.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-329307 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-329307 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (49.01120038s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-329307 -n old-k8s-version-329307
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-066649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-066649 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (35.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-066649 --alsologtostderr -v=3
E1227 20:43:24.900439   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-066649 --alsologtostderr -v=3: (35.232899334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (35.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-225351 -n no-preload-225351
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-225351 -n no-preload-225351: exit status 7 (67.778879ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-225351 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (53.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-225351 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-225351 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (52.880171218s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-225351 -n no-preload-225351
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649: exit status 7 (84.753557ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-066649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-066649 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-066649 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (1m5.730194213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8djgk" [d973c850-3cc1-4ac0-967f-ff6d3b085432] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8djgk" [d973c850-3cc1-4ac0-967f-ff6d3b085432] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005964027s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8djgk" [d973c850-3cc1-4ac0-967f-ff6d3b085432] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005079344s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-329307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-329307 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-329307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-329307 --alsologtostderr -v=1: (1.149360115s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329307 -n old-k8s-version-329307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329307 -n old-k8s-version-329307: exit status 2 (290.131015ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329307 -n old-k8s-version-329307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329307 -n old-k8s-version-329307: exit status 2 (289.325739ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-329307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-329307 --alsologtostderr -v=1: (1.204407381s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-329307 -n old-k8s-version-329307
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-329307 -n old-k8s-version-329307
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.87s)
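The Pause test above drives a pause → status → unpause cycle: while the profile is paused, `minikube status` intentionally exits with code 2, which the harness records as "status error: exit status 2 (may be ok)" rather than failing. A minimal POSIX-shell sketch of handling that convention in a script; `check_component` is a hypothetical stand-in for the real `out/minikube-linux-amd64 status --format='{{.APIServer}}' -p <profile>` call:

```shell
# Treat exit code 2 from a status probe as informational, not fatal,
# mirroring the "(may be ok)" handling in start_stop_delete_test.go.
check_component() {
  # $1: the status string minikube would print; $2: its exit code.
  out=$1; rc=$2
  if [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ]; then
    printf '%s\n' "$out"        # e.g. "Paused" or "Stopped" while paused
  else
    printf 'status error: exit %s\n' "$rc" >&2
    return "$rc"
  fi
}

check_component Paused 2     # prints "Paused" despite the non-zero exit
check_component Stopped 2    # prints "Stopped"
```

Any other non-zero exit code would still be surfaced as an error, which matches why the harness re-checks both `{{.APIServer}}` and `{{.Kubelet}}` after `unpause`.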

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-612308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-612308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (46.128692882s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.13s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fcnkx" [af86067a-8254-472e-a480-2a0de1cd6fee] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fcnkx" [af86067a-8254-472e-a480-2a0de1cd6fee] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.007335805s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fcnkx" [af86067a-8254-472e-a480-2a0de1cd6fee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.099291111s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-225351 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-kmqwf" [e771e211-3a2b-4a62-93e4-2c667f98eccb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-kmqwf" [e771e211-3a2b-4a62-93e4-2c667f98eccb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.006590265s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-225351 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-225351 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-225351 --alsologtostderr -v=1: (1.190080753s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-225351 -n no-preload-225351
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-225351 -n no-preload-225351: exit status 2 (281.285674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-225351 -n no-preload-225351
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-225351 -n no-preload-225351: exit status 2 (293.617332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-225351 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-225351 -n no-preload-225351
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-225351 -n no-preload-225351
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-596140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-596140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (1m21.678178381s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.68s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.7140851s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-kmqwf" [e771e211-3a2b-4a62-93e4-2c667f98eccb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006019701s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-066649 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-066649 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-066649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-066649 --alsologtostderr -v=1: (1.032599068s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649: exit status 2 (311.736938ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649: exit status 2 (326.202258ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-066649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-066649 --alsologtostderr -v=1: (1.006663609s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-066649 -n default-k8s-diff-port-066649
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-612308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-612308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.268715035s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-612308 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-612308 --alsologtostderr -v=3: (35.98401007s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (35.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m30.68698024s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.69s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612308 -n newest-cni-612308
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612308 -n newest-cni-612308: exit status 7 (72.527941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-612308 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-612308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-612308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (50.720466113s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612308 -n newest-cni-612308
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.07s)
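The restart above re-applies `--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16`. A quick, cluster-dependent way to confirm that kubeadm actually handed that range to the node is to read the node's `spec.podCIDR` with a standard kubectl jsonpath query (context name taken from this run; this is a sketch to run against the live profile, not standalone):

```shell
# Prints the node's pod CIDR; a value inside 10.42.0.0/16 confirms
# the kubeadm extra-config from the start command took effect.
kubectl --context newest-cni-612308 get nodes \
  -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'
```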

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-596140 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dd364cf8-88da-4e67-b754-d8b3e5a6567c] Pending
helpers_test.go:353: "busybox" [dd364cf8-88da-4e67-b754-d8b3e5a6567c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dd364cf8-88da-4e67-b754-d8b3e5a6567c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00410327s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-596140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-596140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-596140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-596140 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-596140 --alsologtostderr -v=3: (34.300912077s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (34.30s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-612308 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-612308 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612308 -n newest-cni-612308
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612308 -n newest-cni-612308: exit status 2 (240.01372ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612308 -n newest-cni-612308
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612308 -n newest-cni-612308: exit status 2 (236.211757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-612308 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612308 -n newest-cni-612308
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612308 -n newest-cni-612308
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.79s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-mh27r" [1501d2d3-d779-45a9-b1c4-c239903aed26] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008112815s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.763259936s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.76s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-589895 "pgrep -a kubelet"
I1227 20:46:34.423973   62937 config.go:182] Loaded profile config "auto-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8dqbd" [8454fd6f-4a30-4cfd-9eba-c007fa94f211] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8dqbd" [8454fd6f-4a30-4cfd-9eba-c007fa94f211] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00717477s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-589895 "pgrep -a kubelet"
I1227 20:46:38.977325   62937 config.go:182] Loaded profile config "kindnet-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lgsfm" [e4396147-c351-4867-a5f4-381025ee94ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lgsfm" [e4396147-c351-4867-a5f4-381025ee94ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003744457s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596140 -n embed-certs-596140
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596140 -n embed-certs-596140: exit status 7 (69.534649ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-596140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (50.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-596140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-596140 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0: (49.935266234s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596140 -n embed-certs-596140
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.42s)

TestNetworkPlugins/group/custom-flannel/Start (83.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.904259524s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.90s)

TestNetworkPlugins/group/flannel/Start (96.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1227 20:47:13.475640   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.481033   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.491444   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.511961   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.552459   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.632891   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:13.793635   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:14.114154   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:14.754908   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:16.035231   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:16.302992   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/addons-099251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:18.595946   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:23.716930   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:33.958083   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m36.921635697s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.92s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fdxzn" [a750eee0-647e-4956-90f0-025f86cfb90c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1227 20:47:46.471513   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.476840   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.487189   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.507551   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.547954   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.628357   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:46.788901   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:47.110009   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fdxzn" [a750eee0-647e-4956-90f0-025f86cfb90c] Running
E1227 20:47:47.750460   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:49.031052   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.170069   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.176050   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.186640   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.207455   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.248216   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.328552   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.489698   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:50.810463   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:51.451324   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:51.591856   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:52.731903   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.00494018s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fdxzn" [a750eee0-647e-4956-90f0-025f86cfb90c] Running
E1227 20:47:54.438500   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/old-k8s-version-329307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:55.292842   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:56.712050   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005042831s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-596140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-596140 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-596140 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-596140 --alsologtostderr -v=1: (1.113424743s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596140 -n embed-certs-596140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596140 -n embed-certs-596140: exit status 2 (331.605306ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596140 -n embed-certs-596140
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596140 -n embed-certs-596140: exit status 2 (318.846367ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-596140 --alsologtostderr -v=1
E1227 20:48:00.413262   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-596140 --alsologtostderr -v=1: (1.25288167s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596140 -n embed-certs-596140
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596140 -n embed-certs-596140
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.91s)

TestNetworkPlugins/group/enable-default-cni/Start (80.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1227 20:48:06.953240   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:48:10.654518   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/default-k8s-diff-port-066649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m20.172376696s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-ftqqm" [51367908-f17c-4306-898e-5110a69acb32] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004709495s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-589895 "pgrep -a kubelet"
I1227 20:48:18.692130   62937 config.go:182] Loaded profile config "calico-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tkhfx" [0d8b7b1f-6a19-46f0-ae16-fe3959a81d32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-tkhfx" [0d8b7b1f-6a19-46f0-ae16-fe3959a81d32] Running
I1227 20:48:24.145903   62937 config.go:182] Loaded profile config "custom-flannel-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005693004s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-589895 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-cvzfv" [facbccbc-2deb-4eeb-8b60-15828e197d59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 20:48:24.899831   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/functional-866869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:48:27.434351   62937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/no-preload-225351/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-cvzfv" [facbccbc-2deb-4eeb-8b60-15828e197d59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007005372s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-6vc98" [70a65e2a-a55c-49ef-a09e-ecd1a7d15d60] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005302879s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (76.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-589895 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m16.62085334s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-589895 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-sks2c" [6474f3a2-5f3c-4e43-aebf-f3077bdb9781] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-sks2c" [6474f3a2-5f3c-4e43-aebf-f3077bdb9781] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005011732s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.19s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

                                                
                                    
TestISOImage/VersionJSON (0.2s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1766811082-22332
iso_test.go:118:   kicbase_version: v0.0.48-1766570851-22316
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 01d0f336cc4dfc10d1d838788fc6a0b3aff80c3e
--- PASS: TestISOImage/VersionJSON (0.20s)

                                                
                                    
TestISOImage/eBPFSupport (0.19s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-664983 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-589895 "pgrep -a kubelet"
I1227 20:49:24.300042   62937 config.go:182] Loaded profile config "enable-default-cni-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mlf8j" [f7e62e91-0733-4af2-8403-f70fca36ed10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mlf8j" [f7e62e91-0733-4af2-8403-f70fca36ed10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004498501s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-589895 "pgrep -a kubelet"
I1227 20:50:02.418411   62937 config.go:182] Loaded profile config "bridge-589895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589895 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rwn8r" [d7df159d-1561-4fbd-81d0-3de41fa5ca84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rwn8r" [d7df159d-1561-4fbd-81d0-3de41fa5ca84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004292655s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589895 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589895 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (40/355)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.35.0/cached-images 0
15 TestDownloadOnly/v1.35.0/binaries 0
16 TestDownloadOnly/v1.35.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
276 TestStartStop/group/disable-driver-mounts 0.19
302 TestNetworkPlugins/group/kubenet 3.95
310 TestNetworkPlugins/group/cilium 4.4

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-099251 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-749766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-749766
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.95s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-589895 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-589895

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-589895

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/hosts:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/resolv.conf:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-589895

>>> host: crictl pods:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: crictl containers:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> k8s: describe netcat deployment:
error: context "kubenet-589895" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-589895" does not exist

>>> k8s: netcat logs:
error: context "kubenet-589895" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-589895" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-589895" does not exist

>>> k8s: coredns logs:
error: context "kubenet-589895" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-589895" does not exist

>>> k8s: api server logs:
error: context "kubenet-589895" does not exist

>>> host: /etc/cni:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: ip a s:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: ip r s:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: iptables-save:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: iptables table nat:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-589895" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-589895" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-589895" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: kubelet daemon config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> k8s: kubelet logs:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:39:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.72:8443
  name: kubernetes-upgrade-627460
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:36:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.147:8443
  name: running-upgrade-013380
contexts:
- context:
    cluster: kubernetes-upgrade-627460
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:39:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-627460
  name: kubernetes-upgrade-627460
- context:
    cluster: running-upgrade-013380
    user: running-upgrade-013380
  name: running-upgrade-013380
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-627460
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/kubernetes-upgrade-627460/client.crt
    client-key: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/kubernetes-upgrade-627460/client.key
- name: running-upgrade-013380
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.crt
    client-key: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-589895

>>> host: docker daemon status:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: docker daemon config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: docker system info:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: cri-docker daemon status:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: cri-docker daemon config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: cri-dockerd version:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: containerd daemon status:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: containerd daemon config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: containerd config dump:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: crio daemon status:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: crio daemon config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: /etc/crio:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

>>> host: crio config:
* Profile "kubenet-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589895"

----------------------- debugLogs end: kubenet-589895 [took: 3.773525421s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-589895" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-589895
--- SKIP: TestNetworkPlugins/group/kubenet (3.95s)

TestNetworkPlugins/group/cilium (4.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-589895 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589895

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589895

>>> host: /etc/nsswitch.conf:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/hosts:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/resolv.conf:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589895

>>> host: crictl pods:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: crictl containers:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> k8s: describe netcat deployment:
error: context "cilium-589895" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-589895" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-589895" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-589895" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-589895" does not exist

>>> k8s: coredns logs:
error: context "cilium-589895" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589895" does not exist

>>> k8s: api server logs:
error: context "cilium-589895" does not exist

>>> host: /etc/cni:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: ip a s:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: ip r s:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: iptables-save:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: iptables table nat:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589895

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589895

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589895" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589895" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589895

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589895

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589895" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589895" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589895" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589895" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589895" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: kubelet daemon config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> k8s: kubelet logs:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-59055/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:36:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.147:8443
  name: running-upgrade-013380
contexts:
- context:
    cluster: running-upgrade-013380
    user: running-upgrade-013380
  name: running-upgrade-013380
current-context: ""
kind: Config
users:
- name: running-upgrade-013380
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.crt
    client-key: /home/jenkins/minikube-integration/22332-59055/.minikube/profiles/running-upgrade-013380/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589895

>>> host: docker daemon status:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: docker daemon config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: docker system info:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: cri-docker daemon status:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: cri-docker daemon config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: cri-dockerd version:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: containerd daemon status:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: containerd daemon config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: containerd config dump:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: crio daemon status:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: crio daemon config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: /etc/crio:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

>>> host: crio config:
* Profile "cilium-589895" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589895"

----------------------- debugLogs end: cilium-589895 [took: 4.223473151s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-589895" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-589895
--- SKIP: TestNetworkPlugins/group/cilium (4.40s)
