Test Report: KVM_Linux 22186

5e28b85a1d78221970a3d6d4a20cdd5c3710ee83:2025-12-17:42830

Test fail (11/452)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (286.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 19:33:53.568386  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:34:21.273576  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.408975  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.415455  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.426957  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.448466  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.489957  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.571444  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:10.733050  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:11.054858  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:11.697026  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:12.978662  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:15.541770  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:20.663185  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:30.905466  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:35:51.387029  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:32.350382  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (4m45.319104699s)

-- stdout --
	* [functional-240388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-240388" primary control-plane node in "functional-240388" cluster
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 4m45.319365241s for "functional-240388" cluster.
I1217 19:37:51.467495  259985 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
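For local triage, the failing restart and the same post-mortem collection the harness performs below can be re-run by hand. A minimal sketch, using only commands already shown in this report (the functional-240388 profile and the out/minikube-linux-amd64 binary path are specific to this CI run; adjust KUBECONFIG/MINIKUBE_HOME outside the Jenkins environment):

# Re-run the restart that failed with exit status 80:
out/minikube-linux-amd64 start -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
# On a non-zero exit, gather the same diagnostics the test helpers collect:
out/minikube-linux-amd64 status --format={{.Host}} -p functional-240388 -n functional-240388
out/minikube-linux-amd64 -p functional-240388 logs -n 25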
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-240388 -n functional-240388
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-750489 image ls --format table --alsologtostderr                                                          │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:30 UTC │
	│ service │ functional-750489 service list                                                                                       │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:30 UTC │
	│ service │ functional-750489 service list -o json                                                                               │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service --namespace=default --https --url hello-node                                               │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service hello-node --url --format={{.IP}}                                                          │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service hello-node --url                                                                           │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ delete  │ -p functional-750489                                                                                                 │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ start   │ -p functional-240388 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-rc.1 │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ start   │ -p functional-240388 --alsologtostderr -v=8                                                                          │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:32 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:3.1                                                                │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:32 UTC │ 17 Dec 25 19:32 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:3.3                                                                │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:32 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:latest                                                             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache add minikube-local-cache-test:functional-240388                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache delete minikube-local-cache-test:functional-240388                                           │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ list                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl images                                                                             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo docker rmi registry.k8s.io/pause:latest                                                   │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │                     │
	│ cache   │ functional-240388 cache reload                                                                                       │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ kubectl │ functional-240388 kubectl -- --context functional-240388 get pods                                                    │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ start   │ -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:33:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:33:06.203503  267684 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:33:06.203782  267684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:33:06.203786  267684 out.go:374] Setting ErrFile to fd 2...
	I1217 19:33:06.203789  267684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:33:06.204003  267684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:33:06.204492  267684 out.go:368] Setting JSON to false
	I1217 19:33:06.205478  267684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4530,"bootTime":1765995456,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:33:06.205528  267684 start.go:143] virtualization: kvm guest
	I1217 19:33:06.207441  267684 out.go:179] * [functional-240388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:33:06.208646  267684 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:33:06.208699  267684 notify.go:221] Checking for updates...
	I1217 19:33:06.211236  267684 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:33:06.212698  267684 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:33:06.213817  267684 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:33:06.215252  267684 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:33:06.216551  267684 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:33:06.218219  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:06.218312  267684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:33:06.251540  267684 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:33:06.252759  267684 start.go:309] selected driver: kvm2
	I1217 19:33:06.252769  267684 start.go:927] validating driver "kvm2" against &{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:06.252872  267684 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:33:06.253839  267684 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:33:06.253869  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:06.253927  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:06.253968  267684 start.go:353] cluster config:
	{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:06.254054  267684 iso.go:125] acquiring lock: {Name:mkeac5b890dbb93d0e36dd357fe6f0cc980f247e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:33:06.256031  267684 out.go:179] * Starting "functional-240388" primary control-plane node in "functional-240388" cluster
	I1217 19:33:06.257078  267684 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:33:06.257104  267684 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1217 19:33:06.257110  267684 cache.go:65] Caching tarball of preloaded images
	I1217 19:33:06.257199  267684 preload.go:238] Found /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 19:33:06.257207  267684 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1217 19:33:06.257315  267684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/config.json ...
	I1217 19:33:06.257529  267684 start.go:360] acquireMachinesLock for functional-240388: {Name:mkc3bc9f6c99eb74eb5c5fedf7f00499ebad23f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 19:33:06.257571  267684 start.go:364] duration metric: took 27.823µs to acquireMachinesLock for "functional-240388"
	I1217 19:33:06.257581  267684 start.go:96] Skipping create...Using existing machine configuration
	I1217 19:33:06.257585  267684 fix.go:54] fixHost starting: 
	I1217 19:33:06.259464  267684 fix.go:112] recreateIfNeeded on functional-240388: state=Running err=<nil>
	W1217 19:33:06.259480  267684 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 19:33:06.261165  267684 out.go:252] * Updating the running kvm2 "functional-240388" VM ...
	I1217 19:33:06.261187  267684 machine.go:94] provisionDockerMachine start ...
	I1217 19:33:06.263928  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.264385  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.264410  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.264635  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.264883  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.264889  267684 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:33:06.378717  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-240388
	
	I1217 19:33:06.378738  267684 buildroot.go:166] provisioning hostname "functional-240388"
	I1217 19:33:06.382239  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.382773  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.382796  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.383019  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.383275  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.383283  267684 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-240388 && echo "functional-240388" | sudo tee /etc/hostname
	I1217 19:33:06.513472  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-240388
	
	I1217 19:33:06.516442  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.516888  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.516905  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.517149  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.517343  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.517355  267684 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-240388' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-240388/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-240388' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:33:06.629940  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:33:06.629963  267684 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-255930/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-255930/.minikube}
	I1217 19:33:06.630013  267684 buildroot.go:174] setting up certificates
	I1217 19:33:06.630022  267684 provision.go:84] configureAuth start
	I1217 19:33:06.632827  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.633218  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.633234  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.635673  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.636028  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.636051  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.636175  267684 provision.go:143] copyHostCerts
	I1217 19:33:06.636228  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem, removing ...
	I1217 19:33:06.636238  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem
	I1217 19:33:06.636309  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem (1082 bytes)
	I1217 19:33:06.636400  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem, removing ...
	I1217 19:33:06.636404  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem
	I1217 19:33:06.636429  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem (1123 bytes)
	I1217 19:33:06.636482  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem, removing ...
	I1217 19:33:06.636485  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem
	I1217 19:33:06.636506  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem (1675 bytes)
	I1217 19:33:06.636551  267684 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem org=jenkins.functional-240388 san=[127.0.0.1 192.168.39.22 functional-240388 localhost minikube]
	I1217 19:33:06.786573  267684 provision.go:177] copyRemoteCerts
	I1217 19:33:06.786659  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:33:06.789975  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.790330  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.790345  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.790477  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:06.879675  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 19:33:06.911239  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:33:06.942304  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:33:06.972692  267684 provision.go:87] duration metric: took 342.657497ms to configureAuth
	I1217 19:33:06.972712  267684 buildroot.go:189] setting minikube options for container-runtime
	I1217 19:33:06.972901  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:06.975759  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.976128  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.976144  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.976309  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.976500  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.976505  267684 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 19:33:07.088739  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1217 19:33:07.088756  267684 buildroot.go:70] root file system type: tmpfs
	I1217 19:33:07.088852  267684 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 19:33:07.092317  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.092778  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.092795  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.092955  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.093202  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.093245  267684 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 19:33:07.228187  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 19:33:07.231148  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.231515  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.231529  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.231733  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.231924  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.231933  267684 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 19:33:07.348208  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:33:07.348223  267684 machine.go:97] duration metric: took 1.087029537s to provisionDockerMachine
	I1217 19:33:07.348235  267684 start.go:293] postStartSetup for "functional-240388" (driver="kvm2")
	I1217 19:33:07.348246  267684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:33:07.348303  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:33:07.351188  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.351680  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.351698  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.351844  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.437905  267684 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:33:07.443173  267684 info.go:137] Remote host: Buildroot 2025.02
	I1217 19:33:07.443192  267684 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-255930/.minikube/addons for local assets ...
	I1217 19:33:07.443261  267684 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-255930/.minikube/files for local assets ...
	I1217 19:33:07.443368  267684 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem -> 2599852.pem in /etc/ssl/certs
	I1217 19:33:07.443455  267684 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts -> hosts in /etc/test/nested/copy/259985
	I1217 19:33:07.443494  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/259985
	I1217 19:33:07.456195  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem --> /etc/ssl/certs/2599852.pem (1708 bytes)
	I1217 19:33:07.487969  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts --> /etc/test/nested/copy/259985/hosts (40 bytes)
	I1217 19:33:07.519505  267684 start.go:296] duration metric: took 171.253835ms for postStartSetup
	I1217 19:33:07.519546  267684 fix.go:56] duration metric: took 1.261959532s for fixHost
	I1217 19:33:07.522654  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.523039  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.523063  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.523261  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.523466  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.523470  267684 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 19:33:07.636250  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765999987.632110773
	
	I1217 19:33:07.636266  267684 fix.go:216] guest clock: 1765999987.632110773
	I1217 19:33:07.636274  267684 fix.go:229] Guest: 2025-12-17 19:33:07.632110773 +0000 UTC Remote: 2025-12-17 19:33:07.519549795 +0000 UTC m=+1.366822896 (delta=112.560978ms)
	I1217 19:33:07.636297  267684 fix.go:200] guest clock delta is within tolerance: 112.560978ms
	I1217 19:33:07.636302  267684 start.go:83] releasing machines lock for "functional-240388", held for 1.378724961s
	I1217 19:33:07.639671  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.640215  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.640235  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.640830  267684 ssh_runner.go:195] Run: cat /version.json
	I1217 19:33:07.640915  267684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:33:07.643978  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644315  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.644329  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644334  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644500  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.644911  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.644934  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.645119  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.727454  267684 ssh_runner.go:195] Run: systemctl --version
	I1217 19:33:07.761328  267684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:33:07.768413  267684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:33:07.768480  267684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:33:07.781436  267684 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 19:33:07.781462  267684 start.go:496] detecting cgroup driver to use...
	I1217 19:33:07.781587  267684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:33:07.808240  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 19:33:07.822696  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 19:33:07.836690  267684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 19:33:07.836752  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 19:33:07.850854  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 19:33:07.865319  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 19:33:07.881786  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 19:33:07.896674  267684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:33:07.913883  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 19:33:07.928739  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 19:33:07.943427  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 19:33:07.958124  267684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:33:07.969975  267684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:33:07.983204  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:08.188650  267684 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 19:33:08.241722  267684 start.go:496] detecting cgroup driver to use...
	I1217 19:33:08.241799  267684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 19:33:08.261139  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:33:08.279259  267684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:33:08.309361  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:33:08.326823  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 19:33:08.343826  267684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:33:08.368675  267684 ssh_runner.go:195] Run: which cri-dockerd
	I1217 19:33:08.373233  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 19:33:08.386017  267684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 19:33:08.407873  267684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 19:33:08.615617  267684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 19:33:08.826661  267684 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 19:33:08.826828  267684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 19:33:08.852952  267684 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 19:33:08.869029  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:09.065134  267684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 19:33:40.054438  267684 ssh_runner.go:235] Completed: sudo systemctl restart docker: (30.989264208s)
	I1217 19:33:40.054535  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:33:40.092868  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 19:33:40.125996  267684 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 19:33:40.170501  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 19:33:40.189408  267684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 19:33:40.345265  267684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 19:33:40.504425  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:40.661370  267684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 19:33:40.704620  267684 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 19:33:40.720078  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:40.910684  267684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 19:33:41.031296  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 19:33:41.051233  267684 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 19:33:41.051302  267684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 19:33:41.057808  267684 start.go:564] Will wait 60s for crictl version
	I1217 19:33:41.057880  267684 ssh_runner.go:195] Run: which crictl
	I1217 19:33:41.062048  267684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 19:33:41.095492  267684 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1217 19:33:41.095556  267684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 19:33:41.122830  267684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 19:33:41.151134  267684 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 28.5.2 ...
	I1217 19:33:41.154049  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:41.154487  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:41.154506  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:41.154685  267684 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 19:33:41.161212  267684 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 19:33:41.162920  267684 kubeadm.go:884] updating cluster {Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:33:41.163089  267684 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:33:41.163139  267684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 19:33:41.191761  267684 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-240388
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 19:33:41.191775  267684 docker.go:621] Images already preloaded, skipping extraction
	I1217 19:33:41.191834  267684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 19:33:41.239915  267684 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-240388
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 19:33:41.239930  267684 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:33:41.239939  267684 kubeadm.go:935] updating node { 192.168.39.22 8441 v1.35.0-rc.1 docker true true} ...
	I1217 19:33:41.240072  267684 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-240388 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
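	The kubelet unit drop-in rendered above is what gets copied onto the node a few lines later (the 322-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To inspect what actually landed on the node, something like the following should work (a sketch; the paths come from the scp lines below):
	  $ minikube -p functional-240388 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  $ minikube -p functional-240388 ssh -- sudo systemctl cat kubelet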
	I1217 19:33:41.240179  267684 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 19:33:41.426914  267684 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 19:33:41.426938  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:41.426957  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:41.426971  267684 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:33:41.426995  267684 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-240388 NodeName:functional-240388 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:33:41.427126  267684 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-240388"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:33:41.427217  267684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:33:41.453808  267684 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:33:41.453878  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:33:41.474935  267684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1217 19:33:41.530294  267684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 19:33:41.601303  267684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
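	At this point the generated kubeadm config, including the NamespaceAutoProvision override, exists on the node only as /var/tmp/minikube/kubeadm.yaml.new; it replaces kubeadm.yaml further down once drift is detected. To confirm what actually got rendered, a manual check could look like this (a sketch; the first path is from the scp line above, while the manifest path is the standard kubeadm static-pod location and only reflects the override after the control-plane phase reruns below):
	  $ minikube -p functional-240388 ssh -- sudo grep -A1 enable-admission-plugins /var/tmp/minikube/kubeadm.yaml.new
	  $ minikube -p functional-240388 ssh -- sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml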
	I1217 19:33:41.724558  267684 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I1217 19:33:41.735088  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:42.073369  267684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:33:42.122754  267684 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388 for IP: 192.168.39.22
	I1217 19:33:42.122769  267684 certs.go:195] generating shared ca certs ...
	I1217 19:33:42.122787  267684 certs.go:227] acquiring lock for ca certs: {Name:mk41d44cf7495c219db6c5af86332dabe9b164c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:42.122952  267684 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-255930/.minikube/ca.key
	I1217 19:33:42.122986  267684 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.key
	I1217 19:33:42.122993  267684 certs.go:257] generating profile certs ...
	I1217 19:33:42.123066  267684 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.key
	I1217 19:33:42.123140  267684 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.key.69fe0bcf
	I1217 19:33:42.123174  267684 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.key
	I1217 19:33:42.123282  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985.pem (1338 bytes)
	W1217 19:33:42.123309  267684 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985_empty.pem, impossibly tiny 0 bytes
	I1217 19:33:42.123314  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:33:42.123336  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:33:42.123355  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:33:42.123374  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem (1675 bytes)
	I1217 19:33:42.123410  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem (1708 bytes)
	I1217 19:33:42.123979  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:33:42.258305  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 19:33:42.322504  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:33:42.483405  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 19:33:42.625002  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 19:33:42.692761  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 19:33:42.745086  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:33:42.793780  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:33:42.841690  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem --> /usr/share/ca-certificates/2599852.pem (1708 bytes)
	I1217 19:33:42.891346  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:33:42.933420  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985.pem --> /usr/share/ca-certificates/259985.pem (1338 bytes)
	I1217 19:33:42.962368  267684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:33:42.984665  267684 ssh_runner.go:195] Run: openssl version
	I1217 19:33:42.992262  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.005810  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/259985.pem /etc/ssl/certs/259985.pem
	I1217 19:33:43.028391  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.034079  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:31 /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.034138  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.041611  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:33:43.053628  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.065540  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2599852.pem /etc/ssl/certs/2599852.pem
	I1217 19:33:43.076857  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.082352  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:31 /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.082399  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.089508  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:33:43.101894  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.113900  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:33:43.126463  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.131738  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:20 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.131806  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.139221  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:33:43.151081  267684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:33:43.156496  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 19:33:43.163542  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 19:33:43.171156  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 19:33:43.179401  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 19:33:43.187058  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 19:33:43.194355  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
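	Each openssl run above uses -checkend 86400, i.e. it asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check, plus the actual expiry date, can be reproduced by hand (a sketch; the certificate path is one of those checked above):
	  $ minikube -p functional-240388 ssh -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	  $ minikube -p functional-240388 ssh -- sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	  (exit status 0 means the certificate is still valid 24 hours from now)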
	I1217 19:33:43.201350  267684 kubeadm.go:401] StartCluster: {Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:43.201471  267684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 19:33:43.218924  267684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:33:43.230913  267684 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 19:33:43.230923  267684 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 19:33:43.230973  267684 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 19:33:43.242368  267684 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.242866  267684 kubeconfig.go:125] found "functional-240388" server: "https://192.168.39.22:8441"
	I1217 19:33:43.243982  267684 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 19:33:43.254672  267684 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1217 19:33:43.254681  267684 kubeadm.go:1161] stopping kube-system containers ...
	I1217 19:33:43.254745  267684 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 19:33:43.275954  267684 docker.go:484] Stopping containers: [a17ea1c89782 8512c4ee5234 6a5939c9502d 1acc72667ffe 9cac3f35da9f 719551455fba 192352b42712 67f00add7f90 cf40faa6d26b 2eee0e13328f 16980b72586c ca0a9d83ba38 9b648a420d5f 4af0a786a194 287f41e5445c bb348be6d197 148616d57564 2a94d92ddfbf a7ce08614779 7a5020f70312 9fca1633c22a 54deab0a7e37 bfcb4221d4a7 7b61216a620a 5179f7af9585 70c515d79623 de691c17fad0 c9f3d097d04d bc39802d6918 1c77d49437e1 4953e0b7245e]
	I1217 19:33:43.276067  267684 ssh_runner.go:195] Run: docker stop a17ea1c89782 8512c4ee5234 6a5939c9502d 1acc72667ffe 9cac3f35da9f 719551455fba 192352b42712 67f00add7f90 cf40faa6d26b 2eee0e13328f 16980b72586c ca0a9d83ba38 9b648a420d5f 4af0a786a194 287f41e5445c bb348be6d197 148616d57564 2a94d92ddfbf a7ce08614779 7a5020f70312 9fca1633c22a 54deab0a7e37 bfcb4221d4a7 7b61216a620a 5179f7af9585 70c515d79623 de691c17fad0 c9f3d097d04d bc39802d6918 1c77d49437e1 4953e0b7245e
	I1217 19:33:43.624025  267684 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 19:33:43.676356  267684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:33:43.689348  267684 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 19:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Dec 17 19:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5677 Dec 17 19:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5585 Dec 17 19:32 /etc/kubernetes/scheduler.conf
	
	I1217 19:33:43.689427  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 19:33:43.700899  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 19:33:43.712072  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.712160  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:33:43.724140  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 19:33:43.735067  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.735130  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:33:43.746444  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 19:33:43.757136  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.757188  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:33:43.768689  267684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:33:43.780126  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:43.830635  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.318201  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.586483  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.644239  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
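	Because existing configuration files were found, the control plane is rebuilt phase by phase rather than with a full kubeadm init/reset. Collected from the Run: lines above, the sequence amounts to executing the following on the node (paths and binary directory exactly as logged):
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml'
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml'
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml'
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml'
	  sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml'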
	I1217 19:33:44.723984  267684 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:33:44.724052  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:45.225085  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:45.725212  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:46.225040  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:46.263588  267684 api_server.go:72] duration metric: took 1.539620704s to wait for apiserver process to appear ...
	I1217 19:33:46.263624  267684 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:33:46.263642  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.060178  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 19:33:48.060199  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 19:33:48.060212  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.082825  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 19:33:48.082853  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 19:33:48.264243  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.270123  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:48.270140  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:48.763703  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.771293  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:48.771313  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:49.263841  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:49.285040  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:49.285061  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:49.764778  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:49.771574  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 200:
	ok
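	The 403 responses at the start of this wait are expected right after a restart: the unauthenticated probe hits /healthz before the RBAC bootstrap roles that allow anonymous access to it exist, and the later 500s list the individual checks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that are still pending before the endpoint finally returns a bare 200/ok as above. The same probe can be run from the host (a sketch; -k skips verification of the minikube-generated serving certificate, and ?verbose asks the apiserver for per-check results):
	  $ curl -k https://192.168.39.22:8441/healthz
	  $ curl -k 'https://192.168.39.22:8441/healthz?verbose'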
	I1217 19:33:49.779794  267684 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:33:49.779830  267684 api_server.go:131] duration metric: took 3.516200098s to wait for apiserver health ...
	I1217 19:33:49.779839  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:49.779849  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:49.781831  267684 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 19:33:49.783461  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 19:33:49.809372  267684 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
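	Bridge CNI is selected because the kvm2 driver with the docker runtime on Kubernetes v1.24+ has no other CNI configured; the 496-byte conflist written here is what the CRI (cri-dockerd) will use for pod networking. To look at the generated file on the node (a sketch; the path is taken from the scp line above):
	  $ minikube -p functional-240388 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist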
	I1217 19:33:49.861811  267684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:33:49.866819  267684 system_pods.go:59] 7 kube-system pods found
	I1217 19:33:49.866868  267684 system_pods.go:61] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:49.866878  267684 system_pods.go:61] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:49.866884  267684 system_pods.go:61] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:49.866893  267684 system_pods.go:61] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:49.866903  267684 system_pods.go:61] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:49.866909  267684 system_pods.go:61] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:49.866913  267684 system_pods.go:61] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:49.866920  267684 system_pods.go:74] duration metric: took 5.094555ms to wait for pod list to return data ...
	I1217 19:33:49.866929  267684 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:33:49.872686  267684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 19:33:49.872705  267684 node_conditions.go:123] node cpu capacity is 2
	I1217 19:33:49.872722  267684 node_conditions.go:105] duration metric: took 5.787969ms to run NodePressure ...
	I1217 19:33:49.872783  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:50.179374  267684 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 19:33:50.182281  267684 kubeadm.go:744] kubelet initialised
	I1217 19:33:50.182292  267684 kubeadm.go:745] duration metric: took 2.903208ms waiting for restarted kubelet to initialise ...
	I1217 19:33:50.182307  267684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:33:50.201867  267684 ops.go:34] apiserver oom_adj: -16
	I1217 19:33:50.201881  267684 kubeadm.go:602] duration metric: took 6.970952997s to restartPrimaryControlPlane
	I1217 19:33:50.201892  267684 kubeadm.go:403] duration metric: took 7.000554069s to StartCluster
	I1217 19:33:50.201919  267684 settings.go:142] acquiring lock: {Name:mk9bce2c5cb192383c5c2d74365fff53c608cc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:50.202011  267684 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:33:50.203049  267684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-255930/kubeconfig: {Name:mk8f63919c382cf8d5b565d23aa50d046bd25197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:50.203354  267684 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 19:33:50.203438  267684 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:33:50.203528  267684 addons.go:70] Setting storage-provisioner=true in profile "functional-240388"
	I1217 19:33:50.203546  267684 addons.go:239] Setting addon storage-provisioner=true in "functional-240388"
	W1217 19:33:50.203553  267684 addons.go:248] addon storage-provisioner should already be in state true
	I1217 19:33:50.203581  267684 host.go:66] Checking if "functional-240388" exists ...
	I1217 19:33:50.203581  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:50.203574  267684 addons.go:70] Setting default-storageclass=true in profile "functional-240388"
	I1217 19:33:50.203616  267684 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-240388"
	I1217 19:33:50.206087  267684 addons.go:239] Setting addon default-storageclass=true in "functional-240388"
	W1217 19:33:50.206095  267684 addons.go:248] addon default-storageclass should already be in state true
	I1217 19:33:50.206113  267684 host.go:66] Checking if "functional-240388" exists ...
	I1217 19:33:50.207362  267684 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:33:50.207371  267684 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:33:50.209580  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.209815  267684 out.go:179] * Verifying Kubernetes components...
	I1217 19:33:50.209822  267684 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:33:50.209953  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:50.209970  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.210109  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:50.211031  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:50.211042  267684 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:33:50.211049  267684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:33:50.213015  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.213326  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:50.213338  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.213455  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:50.474258  267684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:33:50.505661  267684 node_ready.go:35] waiting up to 6m0s for node "functional-240388" to be "Ready" ...
	I1217 19:33:50.509464  267684 node_ready.go:49] node "functional-240388" is "Ready"
	I1217 19:33:50.509481  267684 node_ready.go:38] duration metric: took 3.795113ms for node "functional-240388" to be "Ready" ...
	I1217 19:33:50.509497  267684 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:33:50.509549  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:50.534536  267684 api_server.go:72] duration metric: took 331.145216ms to wait for apiserver process to appear ...
	I1217 19:33:50.534563  267684 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:33:50.534581  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:50.548651  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 200:
	ok
	I1217 19:33:50.550644  267684 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:33:50.550659  267684 api_server.go:131] duration metric: took 16.091059ms to wait for apiserver health ...
	I1217 19:33:50.550667  267684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:33:50.565804  267684 system_pods.go:59] 7 kube-system pods found
	I1217 19:33:50.565825  267684 system_pods.go:61] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.565830  267684 system_pods.go:61] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.565834  267684 system_pods.go:61] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:50.565838  267684 system_pods.go:61] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.565842  267684 system_pods.go:61] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:50.565846  267684 system_pods.go:61] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.565850  267684 system_pods.go:61] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:50.565855  267684 system_pods.go:74] duration metric: took 15.183886ms to wait for pod list to return data ...
	I1217 19:33:50.565862  267684 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:33:50.570724  267684 default_sa.go:45] found service account: "default"
	I1217 19:33:50.570738  267684 default_sa.go:55] duration metric: took 4.870957ms for default service account to be created ...
	I1217 19:33:50.570746  267684 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:33:50.574791  267684 system_pods.go:86] 7 kube-system pods found
	I1217 19:33:50.574808  267684 system_pods.go:89] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.574815  267684 system_pods.go:89] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.574820  267684 system_pods.go:89] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:50.574825  267684 system_pods.go:89] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.574829  267684 system_pods.go:89] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:50.574833  267684 system_pods.go:89] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.574836  267684 system_pods.go:89] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:50.574857  267684 retry.go:31] will retry after 274.467854ms: missing components: kube-apiserver
	I1217 19:33:50.586184  267684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:33:50.600769  267684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
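	The two addons are applied with the kubectl binary bundled for this Kubernetes version against the in-VM kubeconfig, while the pod wait continues in parallel. From the host, the result can be checked through the profile's kubeconfig context (a sketch; the context name matching the profile is an assumption based on minikube's usual naming):
	  $ kubectl --context functional-240388 get storageclass
	  $ kubectl --context functional-240388 -n kube-system get pod storage-provisioner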
	I1217 19:33:50.854208  267684 system_pods.go:86] 7 kube-system pods found
	I1217 19:33:50.854237  267684 system_pods.go:89] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.854243  267684 system_pods.go:89] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.854253  267684 system_pods.go:89] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 19:33:50.854261  267684 system_pods.go:89] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.854267  267684 system_pods.go:89] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running
	I1217 19:33:50.854274  267684 system_pods.go:89] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.854278  267684 system_pods.go:89] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running
	I1217 19:33:50.854286  267684 system_pods.go:126] duration metric: took 283.535465ms to wait for k8s-apps to be running ...
	I1217 19:33:50.854295  267684 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:33:50.854411  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:33:51.435751  267684 system_svc.go:56] duration metric: took 581.446ms WaitForService to wait for kubelet
	I1217 19:33:51.435770  267684 kubeadm.go:587] duration metric: took 1.23238848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:33:51.435787  267684 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:33:51.439212  267684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 19:33:51.439232  267684 node_conditions.go:123] node cpu capacity is 2
	I1217 19:33:51.439249  267684 node_conditions.go:105] duration metric: took 3.454979ms to run NodePressure ...
	I1217 19:33:51.439262  267684 start.go:242] waiting for startup goroutines ...
	I1217 19:33:51.444332  267684 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 19:33:51.445791  267684 addons.go:530] duration metric: took 1.242367945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:33:51.445830  267684 start.go:247] waiting for cluster config update ...
	I1217 19:33:51.445844  267684 start.go:256] writing updated cluster config ...
	I1217 19:33:51.446209  267684 ssh_runner.go:195] Run: rm -f paused
	I1217 19:33:51.452231  267684 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:33:51.457011  267684 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p2jc7" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:33:53.464153  267684 pod_ready.go:104] pod "coredns-7d764666f9-p2jc7" is not "Ready", error: <nil>
	W1217 19:33:55.962739  267684 pod_ready.go:104] pod "coredns-7d764666f9-p2jc7" is not "Ready", error: <nil>
	I1217 19:33:57.974056  267684 pod_ready.go:94] pod "coredns-7d764666f9-p2jc7" is "Ready"
	I1217 19:33:57.974098  267684 pod_ready.go:86] duration metric: took 6.517047688s for pod "coredns-7d764666f9-p2jc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:57.977362  267684 pod_ready.go:83] waiting for pod "etcd-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.983912  267684 pod_ready.go:94] pod "etcd-functional-240388" is "Ready"
	I1217 19:33:59.983929  267684 pod_ready.go:86] duration metric: took 2.00655136s for pod "etcd-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.986470  267684 pod_ready.go:83] waiting for pod "kube-apiserver-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.990980  267684 pod_ready.go:94] pod "kube-apiserver-functional-240388" is "Ready"
	I1217 19:33:59.991000  267684 pod_ready.go:86] duration metric: took 4.511047ms for pod "kube-apiserver-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.993459  267684 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.997937  267684 pod_ready.go:94] pod "kube-controller-manager-functional-240388" is "Ready"
	I1217 19:33:59.997954  267684 pod_ready.go:86] duration metric: took 4.482221ms for pod "kube-controller-manager-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.000145  267684 pod_ready.go:83] waiting for pod "kube-proxy-9b4xt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.361563  267684 pod_ready.go:94] pod "kube-proxy-9b4xt" is "Ready"
	I1217 19:34:00.361586  267684 pod_ready.go:86] duration metric: took 361.42797ms for pod "kube-proxy-9b4xt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.561139  267684 pod_ready.go:83] waiting for pod "kube-scheduler-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:34:02.566440  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:04.567079  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:06.568433  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:09.069006  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:11.568213  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:14.067100  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:16.067895  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:18.068093  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:20.568800  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:23.067669  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:25.068181  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:27.069140  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:29.567682  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:32.067944  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:34.567970  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:36.568410  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:38.568875  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:41.067285  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:43.067582  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:45.068991  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:47.568211  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:50.067798  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:52.567110  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:55.068867  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:57.567142  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:59.567565  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:02.067195  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:04.067302  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:06.068868  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:08.568773  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:11.067299  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:13.067770  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:15.567220  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:17.567959  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:20.067787  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:22.068724  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:24.568343  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:26.568770  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:29.067710  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:31.068145  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:33.568905  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:36.067452  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:38.567405  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:41.067223  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:43.068236  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:45.566660  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:47.568023  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:50.067628  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:52.067691  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:54.566681  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:56.567070  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:58.567205  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:00.567585  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:03.066911  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:05.067697  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:07.567267  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:10.070065  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:12.567282  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:15.067336  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:17.067923  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:19.068236  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:21.567784  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:23.568193  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:26.068627  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:28.568621  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:31.067696  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:33.067753  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:35.567389  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:37.568393  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:40.067868  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:42.068526  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:44.568754  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:47.066209  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:49.067317  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:51.067920  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:53.067951  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:55.568233  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:58.067178  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:00.568182  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:02.568983  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:05.068318  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:07.567541  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:10.068359  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:12.567929  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:15.067934  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:17.568019  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:19.568147  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:22.067853  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:24.568939  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:27.067623  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:29.068086  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:31.068905  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:33.567497  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:35.571222  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:38.068303  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:40.566716  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:42.567116  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:45.066812  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:47.069637  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:49.567275  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	I1217 19:37:51.452903  267684 pod_ready.go:86] duration metric: took 3m50.891734106s for pod "kube-scheduler-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:37:51.452931  267684 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1217 19:37:51.452945  267684 pod_ready.go:40] duration metric: took 4m0.000689077s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:37:51.455139  267684 out.go:203] 
	W1217 19:37:51.456702  267684 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1217 19:37:51.457988  267684 out.go:203] 
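The GUEST_START timeout above is minikube giving up after 4m0s of waiting for the kube-scheduler pod to report Ready. A minimal manual reproduction of that readiness check, assuming the kubectl context for the functional-240388 profile is available locally (minikube creates one named after the profile by default), would be:

	kubectl --context functional-240388 -n kube-system get pods -l component=kube-scheduler \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

During the window covered by the repeated warnings above this would be expected to print False for kube-scheduler-functional-240388.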
	
	
	==> Docker <==
	Dec 17 19:33:43 functional-240388 dockerd[7924]: time="2025-12-17T19:33:43.473198163Z" level=info msg="ignoring event" container=6a5939c9502dbcd502681138b069c09eae18b9d8938bb170983b90cf6c57d283 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:33:44 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:44Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-p2jc7_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1acc72667ffecb9b1f9c1462cd1503c88f603f0996e69135f49a6c923e49ea3e\""
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"bfcb4221d4a7ef30dae2a66cd597250594cae53eecf1156b58fc897c3db4adb2\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"bc39802d69185bb3fe4387bc88803990af8a467199a407d8ff3139edd897ad31\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"de691c17fad0893ec8eda10a693c32635b6669ce118c79ef0ba02c521d246106\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"a7ce08614779dcde6425e02aa30ac8921f91945d930f48ac401c8f72dfd73c97\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"c9f3d097d04d9a9046d428a65af3812c3ab56cc0bffba2a9ad0f66a88bfc4afa\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/99980b89e2f75344098f5ed8a2c5d8550ce5ecb31a633fbe4a58540f6365e83f/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b52517caf11d99d0e709e8bb4270cba47fe414e0d2abdc432798bc350b43f823/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2eee46eecde457c617f877978293fdd4990ba0015ac72f57962f009d7372900e/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c56608b9e4727946ea7e1d159d57918d83a65898cb2bbe22e2de60e264d7921a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-p2jc7_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1acc72667ffecb9b1f9c1462cd1503c88f603f0996e69135f49a6c923e49ea3e\""
	Dec 17 19:33:46 functional-240388 dockerd[7924]: time="2025-12-17T19:33:46.635036806Z" level=info msg="ignoring event" container=9ddcfd03def09fa056533c5127dfcbdc2e222868d94c39bba44f3d1b1c432fdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:33:48 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a51d46f16da861c5c8c4221920bfa125a4f956a42f6e7194de3b5d72ba8aa080/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3c1a68a5a63f99b2b353b54a2893bc8ba4f54b12c5da116a989ab7f96dcc78cb/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28fbe3560cc6992430231caf18aa8e1adaa0a1b5412dbfe4e46274db26a4284a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:58 functional-240388 dockerd[7924]: time="2025-12-17T19:33:58.156995422Z" level=info msg="ignoring event" container=df58efeab0f3eac3bed6fbf1084e04f8510bc53762ed05d42fdf793c4585f427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:34:25 functional-240388 dockerd[7924]: time="2025-12-17T19:34:25.215727670Z" level=info msg="ignoring event" container=d78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:34:54 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:34:54Z" level=error msg="error getting RW layer size for container ID '287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b': Error response from daemon: No such container: 287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b"
	Dec 17 19:34:54 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:34:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b'"
	Dec 17 19:35:16 functional-240388 dockerd[7924]: time="2025-12-17T19:35:16.200016889Z" level=info msg="ignoring event" container=bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:35:24 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:35:24Z" level=error msg="error getting RW layer size for container ID 'd78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce': Error response from daemon: No such container: d78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce"
	Dec 17 19:35:24 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:35:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce'"
	Dec 17 19:36:55 functional-240388 dockerd[7924]: time="2025-12-17T19:36:55.174600498Z" level=info msg="ignoring event" container=c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c389d6e9b8b25       73f80cdc073da       58 seconds ago      Exited              kube-scheduler            8                   c56608b9e4727       kube-scheduler-functional-240388            kube-system
	d3cd2a6021efd       af0321f3a4f38       4 minutes ago       Running             kube-proxy                4                   28fbe3560cc69       kube-proxy-9b4xt                            kube-system
	bbeee7ea3c766       6e38f40d628db       4 minutes ago       Running             storage-provisioner       3                   3c1a68a5a63f9       storage-provisioner                         kube-system
	720ef2e2e28ba       aa5e3ebc0dfed       4 minutes ago       Running             coredns                   3                   a51d46f16da86       coredns-7d764666f9-p2jc7                    kube-system
	060eef1436e87       5032a56602e1b       4 minutes ago       Running             kube-controller-manager   4                   2eee46eecde45       kube-controller-manager-functional-240388   kube-system
	7ed9aced06540       0a108f7189562       4 minutes ago       Running             etcd                      3                   b52517caf11d9       etcd-functional-240388                      kube-system
	5e0236591f856       58865405a13bc       4 minutes ago       Running             kube-apiserver            0                   99980b89e2f75       kube-apiserver-functional-240388            kube-system
	8512c4ee52340       af0321f3a4f38       4 minutes ago       Exited              kube-proxy                3                   cf40faa6d26bc       kube-proxy-9b4xt                            kube-system
	6a5939c9502db       5032a56602e1b       4 minutes ago       Exited              kube-controller-manager   3                   67f00add7f903       kube-controller-manager-functional-240388   kube-system
	2eee0e13328f5       aa5e3ebc0dfed       5 minutes ago       Exited              coredns                   2                   16980b72586cd       coredns-7d764666f9-p2jc7                    kube-system
	9b648a420d5f2       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       2                   9fca1633c22ad       storage-provisioner                         kube-system
	bb348be6d197a       0a108f7189562       5 minutes ago       Exited              etcd                      2                   2a94d92ddfbf2       etcd-functional-240388                      kube-system
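The table shows kube-scheduler on restart attempt 8 and most recently Exited, i.e. crash-looping, while the rest of the control plane is Running. One way to pull that container's own output, using the truncated container ID from the first column (Docker accepts ID prefixes), would be along these lines:

	minikube ssh -p functional-240388 "docker logs c389d6e9b8b25"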
	
	
	==> coredns [2eee0e13328f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38482 - 26281 "HINFO IN 7082336438510137172.6908441513580825570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027199885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [720ef2e2e28b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60164 - 54627 "HINFO IN 8701000521322517761.9006979715444964387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.118925994s
	
	
	==> describe nodes <==
	Name:               functional-240388
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-240388
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=functional-240388
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_31_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:31:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-240388
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:37:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    functional-240388
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca73804dacda4148bcecb3c8c2b68c32
	  System UUID:                ca73804d-acda-4148-bcec-b3c8c2b68c32
	  Boot ID:                    23b31f3c-6ff1-49c8-bae1-b3e21418a3ce
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2jc7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m5s
	  kube-system                 etcd-functional-240388                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m12s
	  kube-system                 kube-apiserver-functional-240388             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-functional-240388    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-9b4xt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-functional-240388             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  6m6s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	  Normal  RegisteredNode  5m8s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	  Normal  RegisteredNode  4m1s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	
	
	==> dmesg <==
	[  +0.280021] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.109752] kauditd_printk_skb: 345 callbacks suppressed
	[  +0.098255] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.156272] kauditd_printk_skb: 165 callbacks suppressed
	[  +0.603143] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025033] kauditd_printk_skb: 219 callbacks suppressed
	[Dec17 19:32] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.510593] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.045349] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.055200] kauditd_printk_skb: 400 callbacks suppressed
	[  +0.128571] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.610657] kauditd_printk_skb: 150 callbacks suppressed
	[Dec17 19:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.173189] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.324441] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.006613] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +5.161832] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.933223] kauditd_printk_skb: 410 callbacks suppressed
	[  +0.335071] kauditd_printk_skb: 228 callbacks suppressed
	[  +6.261034] kauditd_printk_skb: 2 callbacks suppressed
	[Dec17 19:34] kauditd_printk_skb: 34 callbacks suppressed
	[Dec17 19:35] kauditd_printk_skb: 6 callbacks suppressed
	[Dec17 19:36] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [7ed9aced0654] <==
	{"level":"info","ts":"2025-12-17T19:33:46.239077Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T19:33:46.239215Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T19:33:46.235081Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T19:33:46.235098Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:46.239296Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:46.245673Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T19:33:46.245695Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T19:33:46.806159Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cde0bb267fc4e559 is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806211Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cde0bb267fc4e559 became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806249Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cde0bb267fc4e559 received MsgPreVoteResp from cde0bb267fc4e559 at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806259Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cde0bb267fc4e559 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T19:33:46.806272Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cde0bb267fc4e559 became candidate at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808150Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cde0bb267fc4e559 received MsgVoteResp from cde0bb267fc4e559 at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808241Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cde0bb267fc4e559 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T19:33:46.808263Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cde0bb267fc4e559 became leader at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808271Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cde0bb267fc4e559 elected leader cde0bb267fc4e559 at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.811093Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cde0bb267fc4e559","local-member-attributes":"{Name:functional-240388 ClientURLs:[https://192.168.39.22:2379]}","cluster-id":"eaed0234649c774e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T19:33:46.811141Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:33:46.811333Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:33:46.812913Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:33:46.813647Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:33:46.814214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:33:46.814326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:33:46.815065Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T19:33:46.815938Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	
	
	==> etcd [bb348be6d197] <==
	{"level":"info","ts":"2025-12-17T19:32:39.703026Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:32:39.704425Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:32:39.704604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:32:39.705307Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:32:39.706437Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:32:39.709945Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	{"level":"info","ts":"2025-12-17T19:32:39.711194Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T19:33:29.523469Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T19:33:29.523619Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-240388","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	{"level":"error","ts":"2025-12-17T19:33:29.524018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T19:33:36.526356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T19:33:36.535569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.535641Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cde0bb267fc4e559","current-leader-member-id":"cde0bb267fc4e559"}
	{"level":"info","ts":"2025-12-17T19:33:36.535804Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T19:33:36.535839Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536005Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536165Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T19:33:36.536303Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536584Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536670Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T19:33:36.536756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.22:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.540108Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"error","ts":"2025-12-17T19:33:36.540158Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.22:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.540212Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:36.540220Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-240388","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	
	
	==> kernel <==
	 19:37:52 up 6 min,  0 users,  load average: 0.23, 0.54, 0.32
	Linux functional-240388 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e0236591f85] <==
	I1217 19:33:48.193986       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:48.195040       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 19:33:48.195204       1 aggregator.go:187] initial CRD sync complete...
	I1217 19:33:48.195297       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 19:33:48.195312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 19:33:48.195348       1 cache.go:39] Caches are synced for autoregister controller
	I1217 19:33:48.196861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 19:33:48.196978       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 19:33:48.199599       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 19:33:48.199859       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 19:33:48.199994       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 19:33:48.203065       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:48.203116       1 policy_source.go:248] refreshing policies
	E1217 19:33:48.203778       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 19:33:48.205324       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 19:33:48.285472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:33:48.708639       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 19:33:49.014823       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 19:33:50.032084       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 19:33:50.096678       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 19:33:50.147339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:33:50.159097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:33:51.583557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:33:51.633623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 19:33:51.785119       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [060eef1436e8] <==
	I1217 19:33:51.312115       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312158       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312172       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312197       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312243       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317055       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317108       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317196       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317219       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317261       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.320834       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.326367       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327452       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327578       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327778       1 range_allocator.go:177] "Sending events to api server"
	I1217 19:33:51.327884       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 19:33:51.328405       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:51.328621       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.329112       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.344055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:51.367478       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.413161       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.413193       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 19:33:51.413198       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 19:33:51.444447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [6a5939c9502d] <==
	I1217 19:33:42.779068       1 serving.go:386] Generated self-signed cert in-memory
	I1217 19:33:42.803815       1 controllermanager.go:189] "Starting" version="v1.35.0-rc.1"
	I1217 19:33:42.805416       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:33:42.809073       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 19:33:42.809222       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 19:33:42.809956       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 19:33:42.811439       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [8512c4ee5234] <==
	I1217 19:33:42.844139       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:33:42.908637       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [d3cd2a6021ef] <==
	I1217 19:33:50.115201       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:50.219503       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:50.219553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.22"]
	E1217 19:33:50.219660       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:33:50.358153       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 19:33:50.358275       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 19:33:50.358324       1 server_linux.go:136] "Using iptables Proxier"
	I1217 19:33:50.370513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:33:50.370948       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 19:33:50.371342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:33:50.382699       1 config.go:200] "Starting service config controller"
	I1217 19:33:50.382951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:33:50.382986       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:33:50.383135       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:33:50.383323       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:33:50.383345       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:33:50.398739       1 config.go:309] "Starting node config controller"
	I1217 19:33:50.398767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:33:50.398774       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:33:50.484896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:33:50.484936       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:33:50.484967       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c389d6e9b8b2] <==
	I1217 19:36:55.152302       1 serving.go:386] Generated self-signed cert in-memory
	E1217 19:36:55.157564       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
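This is the immediate cause of the crash loop seen above: the scheduler cannot bind its secure port 10259 on 127.0.0.1 because something, most likely an earlier scheduler instance that has not yet been torn down, still holds it. A sketch for confirming the port owner from inside the VM, assuming ss is present in the Buildroot guest (netstat -tlnp is the usual fallback):

	minikube ssh -p functional-240388 "sudo ss -ltnp | grep 10259"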
	
	
	==> kubelet <==
	Dec 17 19:36:29 functional-240388 kubelet[10024]: E1217 19:36:29.722751   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-240388" containerName="etcd"
	Dec 17 19:36:30 functional-240388 kubelet[10024]: E1217 19:36:30.722813   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-240388" containerName="kube-controller-manager"
	Dec 17 19:36:33 functional-240388 kubelet[10024]: E1217 19:36:33.723188   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-240388" containerName="kube-apiserver"
	Dec 17 19:36:44 functional-240388 kubelet[10024]: E1217 19:36:44.723669   10024 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p2jc7" containerName="coredns"
	Dec 17 19:36:54 functional-240388 kubelet[10024]: E1217 19:36:54.722966   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:54 functional-240388 kubelet[10024]: I1217 19:36:54.723058   10024 scope.go:122] "RemoveContainer" containerID="bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: E1217 19:36:55.227238   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: I1217 19:36:55.227265   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: E1217 19:36:55.227456   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: I1217 19:36:56.250990   10024 scope.go:122] "RemoveContainer" containerID="bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: E1217 19:36:56.251196   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: I1217 19:36:56.251218   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: E1217 19:36:56.252554   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: E1217 19:36:57.266982   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: I1217 19:36:57.267020   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: E1217 19:36:57.267174   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: E1217 19:36:59.912508   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: I1217 19:36:59.912547   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: E1217 19:36:59.912725   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: E1217 19:37:04.157154   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: I1217 19:37:04.157203   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: E1217 19:37:04.157445   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:37:34 functional-240388 kubelet[10024]: E1217 19:37:34.724113   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-240388" containerName="kube-apiserver"
	Dec 17 19:37:38 functional-240388 kubelet[10024]: E1217 19:37:38.723737   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-240388" containerName="kube-controller-manager"
	Dec 17 19:37:50 functional-240388 kubelet[10024]: E1217 19:37:50.723446   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-240388" containerName="etcd"
	
	
	==> storage-provisioner [9b648a420d5f] <==
	W1217 19:33:03.593135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:05.598686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:05.604262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:07.608150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:07.613505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:09.616685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:09.625987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:11.628971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:11.634196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:13.638095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:13.650817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:15.653822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:15.659165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:17.662761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:17.671275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:19.675136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:19.680125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:21.685015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:21.693876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:23.697502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:23.702567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:25.706350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:25.714991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:27.719834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:27.724945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bbeee7ea3c76] <==
	W1217 19:37:28.460304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:30.464230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:30.469876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:32.472633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:32.481327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:34.484750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:34.490191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:36.493922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:36.502058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:38.505748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:38.510948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:40.514429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:40.522307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:42.531030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:42.538013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:44.541135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:44.550203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:46.554820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:46.561173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:48.564580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:48.569437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:50.572296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:50.581205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:52.586052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:52.592816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-240388 -n functional-240388
helpers_test.go:270: (dbg) Run:  kubectl --context functional-240388 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (286.88s)
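Editor's note: the storage-provisioner logs in the post-mortem above repeatedly warn that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice; warnings like these typically come from a client that still lists or watches v1 Endpoints (for example, Endpoints-based leader election). As a rough, hypothetical illustration only (not minikube's or the storage-provisioner's actual code), a client-go consumer reading EndpointSlices instead would look roughly like the sketch below, assuming a kubeconfig at the default location:

// Hypothetical sketch: list discovery.k8s.io/v1 EndpointSlices with client-go
// instead of the deprecated v1 Endpoints API that triggers the warnings above.
// Assumes a reachable cluster and a kubeconfig at $HOME/.kube/config.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// One Service may own several EndpointSlices; print each slice and its endpoint count.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}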

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (1.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-240388 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:848: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.22 PodIP:192.168.39.22 StartTime:2025-12-17 19:33:44 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:0xc000b5e5a0 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc00042a380} Ready:false RestartCount:8 Image:registry.k8s.io/kube-scheduler:v1.35.0-rc.1 ImageID:docker-pullable://registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3 ContainerID:docker://c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d}]}
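Editor's note: the failure above shows kube-scheduler with phase Running but Ready and ContainersReady both False after 8 restarts, matching the CrashLoopBackOff seen in the kubelet log of the previous test. The sketch below is a rough, hypothetical client-go version of the kind of check the test reports (list tier=control-plane pods in kube-system and print phase, Ready condition, and restart counts); it is not the functional_test.go implementation itself:

// Hypothetical control-plane readiness check, similar in spirit to what
// ComponentHealth reports; not the actual test code. Assumes a reachable
// cluster via a kubeconfig at $HOME/.kube/config.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// A pod can be phase Running while its Ready condition is still False,
		// which is exactly the kube-scheduler state reported above.
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		var restarts int32
		for _, cs := range p.Status.ContainerStatuses {
			restarts += cs.RestartCount
		}
		fmt.Printf("%s phase=%s ready=%v restarts=%d\n", p.Name, p.Status.Phase, ready, restarts)
	}
}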
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-240388 -n functional-240388
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 logs -n 25
E1217 19:37:54.272869  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-750489 image ls --format table --alsologtostderr                                                          │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:30 UTC │
	│ service │ functional-750489 service list                                                                                       │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:30 UTC │
	│ service │ functional-750489 service list -o json                                                                               │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:30 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service --namespace=default --https --url hello-node                                               │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service hello-node --url --format={{.IP}}                                                          │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ service │ functional-750489 service hello-node --url                                                                           │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ delete  │ -p functional-750489                                                                                                 │ functional-750489 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ start   │ -p functional-240388 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-rc.1 │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:31 UTC │
	│ start   │ -p functional-240388 --alsologtostderr -v=8                                                                          │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:31 UTC │ 17 Dec 25 19:32 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:3.1                                                                │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:32 UTC │ 17 Dec 25 19:32 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:3.3                                                                │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:32 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache add registry.k8s.io/pause:latest                                                             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache add minikube-local-cache-test:functional-240388                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ functional-240388 cache delete minikube-local-cache-test:functional-240388                                           │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ list                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl images                                                                             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo docker rmi registry.k8s.io/pause:latest                                                   │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │                     │
	│ cache   │ functional-240388 cache reload                                                                                       │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ ssh     │ functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                              │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ kubectl │ functional-240388 kubectl -- --context functional-240388 get pods                                                    │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │ 17 Dec 25 19:33 UTC │
	│ start   │ -p functional-240388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all             │ functional-240388 │ jenkins │ v1.37.0 │ 17 Dec 25 19:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:33:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:33:06.203503  267684 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:33:06.203782  267684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:33:06.203786  267684 out.go:374] Setting ErrFile to fd 2...
	I1217 19:33:06.203789  267684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:33:06.204003  267684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:33:06.204492  267684 out.go:368] Setting JSON to false
	I1217 19:33:06.205478  267684 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4530,"bootTime":1765995456,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:33:06.205528  267684 start.go:143] virtualization: kvm guest
	I1217 19:33:06.207441  267684 out.go:179] * [functional-240388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:33:06.208646  267684 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:33:06.208699  267684 notify.go:221] Checking for updates...
	I1217 19:33:06.211236  267684 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:33:06.212698  267684 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:33:06.213817  267684 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:33:06.215252  267684 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:33:06.216551  267684 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:33:06.218219  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:06.218312  267684 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:33:06.251540  267684 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:33:06.252759  267684 start.go:309] selected driver: kvm2
	I1217 19:33:06.252769  267684 start.go:927] validating driver "kvm2" against &{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:06.252872  267684 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:33:06.253839  267684 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:33:06.253869  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:06.253927  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:06.253968  267684 start.go:353] cluster config:
	{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:06.254054  267684 iso.go:125] acquiring lock: {Name:mkeac5b890dbb93d0e36dd357fe6f0cc980f247e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:33:06.256031  267684 out.go:179] * Starting "functional-240388" primary control-plane node in "functional-240388" cluster
	I1217 19:33:06.257078  267684 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:33:06.257104  267684 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1217 19:33:06.257110  267684 cache.go:65] Caching tarball of preloaded images
	I1217 19:33:06.257199  267684 preload.go:238] Found /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 19:33:06.257207  267684 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1217 19:33:06.257315  267684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/config.json ...
	I1217 19:33:06.257529  267684 start.go:360] acquireMachinesLock for functional-240388: {Name:mkc3bc9f6c99eb74eb5c5fedf7f00499ebad23f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 19:33:06.257571  267684 start.go:364] duration metric: took 27.823µs to acquireMachinesLock for "functional-240388"
	I1217 19:33:06.257581  267684 start.go:96] Skipping create...Using existing machine configuration
	I1217 19:33:06.257585  267684 fix.go:54] fixHost starting: 
	I1217 19:33:06.259464  267684 fix.go:112] recreateIfNeeded on functional-240388: state=Running err=<nil>
	W1217 19:33:06.259480  267684 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 19:33:06.261165  267684 out.go:252] * Updating the running kvm2 "functional-240388" VM ...
	I1217 19:33:06.261187  267684 machine.go:94] provisionDockerMachine start ...
	I1217 19:33:06.263928  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.264385  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.264410  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.264635  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.264883  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.264889  267684 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:33:06.378717  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-240388
	
	I1217 19:33:06.378738  267684 buildroot.go:166] provisioning hostname "functional-240388"
	I1217 19:33:06.382239  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.382773  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.382796  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.383019  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.383275  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.383283  267684 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-240388 && echo "functional-240388" | sudo tee /etc/hostname
	I1217 19:33:06.513472  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-240388
	
	I1217 19:33:06.516442  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.516888  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.516905  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.517149  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.517343  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.517355  267684 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-240388' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-240388/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-240388' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:33:06.629940  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:33:06.629963  267684 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-255930/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-255930/.minikube}
	I1217 19:33:06.630013  267684 buildroot.go:174] setting up certificates
	I1217 19:33:06.630022  267684 provision.go:84] configureAuth start
	I1217 19:33:06.632827  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.633218  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.633234  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.635673  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.636028  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.636051  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.636175  267684 provision.go:143] copyHostCerts
	I1217 19:33:06.636228  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem, removing ...
	I1217 19:33:06.636238  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem
	I1217 19:33:06.636309  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/ca.pem (1082 bytes)
	I1217 19:33:06.636400  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem, removing ...
	I1217 19:33:06.636404  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem
	I1217 19:33:06.636429  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/cert.pem (1123 bytes)
	I1217 19:33:06.636482  267684 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem, removing ...
	I1217 19:33:06.636485  267684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem
	I1217 19:33:06.636506  267684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-255930/.minikube/key.pem (1675 bytes)
	I1217 19:33:06.636551  267684 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem org=jenkins.functional-240388 san=[127.0.0.1 192.168.39.22 functional-240388 localhost minikube]
	I1217 19:33:06.786573  267684 provision.go:177] copyRemoteCerts
	I1217 19:33:06.786659  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:33:06.789975  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.790330  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.790345  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.790477  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:06.879675  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 19:33:06.911239  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:33:06.942304  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:33:06.972692  267684 provision.go:87] duration metric: took 342.657497ms to configureAuth
	I1217 19:33:06.972712  267684 buildroot.go:189] setting minikube options for container-runtime
	I1217 19:33:06.972901  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:06.975759  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.976128  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:06.976144  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:06.976309  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:06.976500  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:06.976505  267684 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 19:33:07.088739  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1217 19:33:07.088756  267684 buildroot.go:70] root file system type: tmpfs
	I1217 19:33:07.088852  267684 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 19:33:07.092317  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.092778  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.092795  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.092955  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.093202  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.093245  267684 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 19:33:07.228187  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 19:33:07.231148  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.231515  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.231529  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.231733  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.231924  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.231933  267684 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 19:33:07.348208  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:33:07.348223  267684 machine.go:97] duration metric: took 1.087029537s to provisionDockerMachine
	I1217 19:33:07.348235  267684 start.go:293] postStartSetup for "functional-240388" (driver="kvm2")
	I1217 19:33:07.348246  267684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:33:07.348303  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:33:07.351188  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.351680  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.351698  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.351844  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.437905  267684 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:33:07.443173  267684 info.go:137] Remote host: Buildroot 2025.02
	I1217 19:33:07.443192  267684 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-255930/.minikube/addons for local assets ...
	I1217 19:33:07.443261  267684 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-255930/.minikube/files for local assets ...
	I1217 19:33:07.443368  267684 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem -> 2599852.pem in /etc/ssl/certs
	I1217 19:33:07.443455  267684 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts -> hosts in /etc/test/nested/copy/259985
	I1217 19:33:07.443494  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/259985
	I1217 19:33:07.456195  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem --> /etc/ssl/certs/2599852.pem (1708 bytes)
	I1217 19:33:07.487969  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts --> /etc/test/nested/copy/259985/hosts (40 bytes)
	I1217 19:33:07.519505  267684 start.go:296] duration metric: took 171.253835ms for postStartSetup
	I1217 19:33:07.519546  267684 fix.go:56] duration metric: took 1.261959532s for fixHost
	I1217 19:33:07.522654  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.523039  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.523063  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.523261  267684 main.go:143] libmachine: Using SSH client type: native
	I1217 19:33:07.523466  267684 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1217 19:33:07.523470  267684 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 19:33:07.636250  267684 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765999987.632110773
	
	I1217 19:33:07.636266  267684 fix.go:216] guest clock: 1765999987.632110773
	I1217 19:33:07.636274  267684 fix.go:229] Guest: 2025-12-17 19:33:07.632110773 +0000 UTC Remote: 2025-12-17 19:33:07.519549795 +0000 UTC m=+1.366822896 (delta=112.560978ms)
	I1217 19:33:07.636297  267684 fix.go:200] guest clock delta is within tolerance: 112.560978ms
	I1217 19:33:07.636302  267684 start.go:83] releasing machines lock for "functional-240388", held for 1.378724961s
	I1217 19:33:07.639671  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.640215  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.640235  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.640830  267684 ssh_runner.go:195] Run: cat /version.json
	I1217 19:33:07.640915  267684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:33:07.643978  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644315  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.644329  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644334  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.644500  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.644911  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:07.644934  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:07.645119  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:07.727454  267684 ssh_runner.go:195] Run: systemctl --version
	I1217 19:33:07.761328  267684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:33:07.768413  267684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:33:07.768480  267684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:33:07.781436  267684 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 19:33:07.781462  267684 start.go:496] detecting cgroup driver to use...
	I1217 19:33:07.781587  267684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:33:07.808240  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 19:33:07.822696  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 19:33:07.836690  267684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 19:33:07.836752  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 19:33:07.850854  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 19:33:07.865319  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 19:33:07.881786  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 19:33:07.896674  267684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:33:07.913883  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 19:33:07.928739  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 19:33:07.943427  267684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 19:33:07.958124  267684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:33:07.969975  267684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:33:07.983204  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:08.188650  267684 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 19:33:08.241722  267684 start.go:496] detecting cgroup driver to use...
	I1217 19:33:08.241799  267684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 19:33:08.261139  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:33:08.279259  267684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:33:08.309361  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:33:08.326823  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 19:33:08.343826  267684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:33:08.368675  267684 ssh_runner.go:195] Run: which cri-dockerd
	I1217 19:33:08.373233  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 19:33:08.386017  267684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 19:33:08.407873  267684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 19:33:08.615617  267684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 19:33:08.826661  267684 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 19:33:08.826828  267684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 19:33:08.852952  267684 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 19:33:08.869029  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:09.065134  267684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 19:33:40.054438  267684 ssh_runner.go:235] Completed: sudo systemctl restart docker: (30.989264208s)
	I1217 19:33:40.054535  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:33:40.092868  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 19:33:40.125996  267684 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 19:33:40.170501  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 19:33:40.189408  267684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 19:33:40.345265  267684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 19:33:40.504425  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:40.661370  267684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 19:33:40.704620  267684 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 19:33:40.720078  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:40.910684  267684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 19:33:41.031296  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 19:33:41.051233  267684 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 19:33:41.051302  267684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
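The start.go entry above says it will wait up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A small, self-contained Go sketch of such a poll-with-deadline loop follows; the 500ms interval and the error message are assumptions, not the logged implementation.

// Sketch of "wait up to 60s for a socket path": poll os.Stat until the file
// exists or the deadline passes. Interval and wording are assumptions.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}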
	I1217 19:33:41.057808  267684 start.go:564] Will wait 60s for crictl version
	I1217 19:33:41.057880  267684 ssh_runner.go:195] Run: which crictl
	I1217 19:33:41.062048  267684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 19:33:41.095492  267684 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.2
	RuntimeApiVersion:  v1
	I1217 19:33:41.095556  267684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 19:33:41.122830  267684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 19:33:41.151134  267684 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 28.5.2 ...
	I1217 19:33:41.154049  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:41.154487  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:41.154506  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:41.154685  267684 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 19:33:41.161212  267684 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 19:33:41.162920  267684 kubeadm.go:884] updating cluster {Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:33:41.163089  267684 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:33:41.163139  267684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 19:33:41.191761  267684 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-240388
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 19:33:41.191775  267684 docker.go:621] Images already preloaded, skipping extraction
	I1217 19:33:41.191834  267684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 19:33:41.239915  267684 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-240388
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 19:33:41.239930  267684 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:33:41.239939  267684 kubeadm.go:935] updating node { 192.168.39.22 8441 v1.35.0-rc.1 docker true true} ...
	I1217 19:33:41.240072  267684 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-240388 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:33:41.240179  267684 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 19:33:41.426914  267684 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 19:33:41.426938  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:41.426957  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:41.426971  267684 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:33:41.426995  267684 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-240388 NodeName:functional-240388 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:33:41.427126  267684 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-240388"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:33:41.427217  267684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:33:41.453808  267684 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:33:41.453878  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:33:41.474935  267684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1217 19:33:41.530294  267684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 19:33:41.601303  267684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
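The kubeadm.yaml generated above and copied to /var/tmp/minikube pins cgroupDriver: cgroupfs for the kubelet so it matches the runtime configured earlier. A standard-library-only Go sketch that scans the copied file for that key is shown below; the hard-coded path mirrors the log, but the check itself is only an illustration.

// Sketch: read the generated kubeadm/kubelet config and report the
// cgroupDriver value. Path and output format are illustrative assumptions.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // location taken from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroupDriver:") {
			val := strings.TrimSpace(strings.TrimPrefix(line, "cgroupDriver:"))
			fmt.Printf("kubelet cgroupDriver = %q\n", val)
			return
		}
	}
	fmt.Println("cgroupDriver not found")
}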
	I1217 19:33:41.724558  267684 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I1217 19:33:41.735088  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:42.073369  267684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:33:42.122754  267684 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388 for IP: 192.168.39.22
	I1217 19:33:42.122769  267684 certs.go:195] generating shared ca certs ...
	I1217 19:33:42.122787  267684 certs.go:227] acquiring lock for ca certs: {Name:mk41d44cf7495c219db6c5af86332dabe9b164c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:42.122952  267684 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-255930/.minikube/ca.key
	I1217 19:33:42.122986  267684 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.key
	I1217 19:33:42.122993  267684 certs.go:257] generating profile certs ...
	I1217 19:33:42.123066  267684 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.key
	I1217 19:33:42.123140  267684 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.key.69fe0bcf
	I1217 19:33:42.123174  267684 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.key
	I1217 19:33:42.123282  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985.pem (1338 bytes)
	W1217 19:33:42.123309  267684 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985_empty.pem, impossibly tiny 0 bytes
	I1217 19:33:42.123314  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:33:42.123336  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:33:42.123355  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:33:42.123374  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/certs/key.pem (1675 bytes)
	I1217 19:33:42.123410  267684 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem (1708 bytes)
	I1217 19:33:42.123979  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:33:42.258305  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 19:33:42.322504  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:33:42.483405  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 19:33:42.625002  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 19:33:42.692761  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 19:33:42.745086  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:33:42.793780  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:33:42.841690  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/ssl/certs/2599852.pem --> /usr/share/ca-certificates/2599852.pem (1708 bytes)
	I1217 19:33:42.891346  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:33:42.933420  267684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-255930/.minikube/certs/259985.pem --> /usr/share/ca-certificates/259985.pem (1338 bytes)
	I1217 19:33:42.962368  267684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:33:42.984665  267684 ssh_runner.go:195] Run: openssl version
	I1217 19:33:42.992262  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.005810  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/259985.pem /etc/ssl/certs/259985.pem
	I1217 19:33:43.028391  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.034079  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:31 /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.034138  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259985.pem
	I1217 19:33:43.041611  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:33:43.053628  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.065540  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2599852.pem /etc/ssl/certs/2599852.pem
	I1217 19:33:43.076857  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.082352  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:31 /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.082399  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2599852.pem
	I1217 19:33:43.089508  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:33:43.101894  267684 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.113900  267684 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:33:43.126463  267684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.131738  267684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:20 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.131806  267684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:33:43.139221  267684 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:33:43.151081  267684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:33:43.156496  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 19:33:43.163542  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 19:33:43.171156  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 19:33:43.179401  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 19:33:43.187058  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 19:33:43.194355  267684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
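Each "openssl x509 ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 follows; the certificate path is taken from the log, and the helper name is hypothetical.

// Sketch of the -checkend 86400 idea: report whether a PEM certificate's
// NotAfter falls within the given window from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}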
	I1217 19:33:43.201350  267684 kubeadm.go:401] StartCluster: {Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:33:43.201471  267684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 19:33:43.218924  267684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:33:43.230913  267684 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 19:33:43.230923  267684 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 19:33:43.230973  267684 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 19:33:43.242368  267684 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.242866  267684 kubeconfig.go:125] found "functional-240388" server: "https://192.168.39.22:8441"
	I1217 19:33:43.243982  267684 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 19:33:43.254672  267684 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1217 19:33:43.254681  267684 kubeadm.go:1161] stopping kube-system containers ...
	I1217 19:33:43.254745  267684 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 19:33:43.275954  267684 docker.go:484] Stopping containers: [a17ea1c89782 8512c4ee5234 6a5939c9502d 1acc72667ffe 9cac3f35da9f 719551455fba 192352b42712 67f00add7f90 cf40faa6d26b 2eee0e13328f 16980b72586c ca0a9d83ba38 9b648a420d5f 4af0a786a194 287f41e5445c bb348be6d197 148616d57564 2a94d92ddfbf a7ce08614779 7a5020f70312 9fca1633c22a 54deab0a7e37 bfcb4221d4a7 7b61216a620a 5179f7af9585 70c515d79623 de691c17fad0 c9f3d097d04d bc39802d6918 1c77d49437e1 4953e0b7245e]
	I1217 19:33:43.276067  267684 ssh_runner.go:195] Run: docker stop a17ea1c89782 8512c4ee5234 6a5939c9502d 1acc72667ffe 9cac3f35da9f 719551455fba 192352b42712 67f00add7f90 cf40faa6d26b 2eee0e13328f 16980b72586c ca0a9d83ba38 9b648a420d5f 4af0a786a194 287f41e5445c bb348be6d197 148616d57564 2a94d92ddfbf a7ce08614779 7a5020f70312 9fca1633c22a 54deab0a7e37 bfcb4221d4a7 7b61216a620a 5179f7af9585 70c515d79623 de691c17fad0 c9f3d097d04d bc39802d6918 1c77d49437e1 4953e0b7245e
	I1217 19:33:43.624025  267684 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 19:33:43.676356  267684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:33:43.689348  267684 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 19:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Dec 17 19:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5677 Dec 17 19:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5585 Dec 17 19:32 /etc/kubernetes/scheduler.conf
	
	I1217 19:33:43.689427  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 19:33:43.700899  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 19:33:43.712072  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.712160  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:33:43.724140  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 19:33:43.735067  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.735130  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:33:43.746444  267684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 19:33:43.757136  267684 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:33:43.757188  267684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:33:43.768689  267684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:33:43.780126  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:43.830635  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.318201  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.586483  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.644239  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:44.723984  267684 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:33:44.724052  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:45.225085  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:45.725212  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:46.225040  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:46.263588  267684 api_server.go:72] duration metric: took 1.539620704s to wait for apiserver process to appear ...
	I1217 19:33:46.263624  267684 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:33:46.263642  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.060178  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 19:33:48.060199  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 19:33:48.060212  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.082825  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 19:33:48.082853  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 19:33:48.264243  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.270123  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:48.270140  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:48.763703  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:48.771293  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:48.771313  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:49.263841  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:49.285040  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 19:33:49.285061  267684 api_server.go:103] status: https://192.168.39.22:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 19:33:49.764778  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:49.771574  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 200:
	ok
	I1217 19:33:49.779794  267684 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:33:49.779830  267684 api_server.go:131] duration metric: took 3.516200098s to wait for apiserver health ...
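The healthz polling above first sees 403 (anonymous access is forbidden), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 "ok". A minimal Go poller against the same endpoint is sketched below; the insecure TLS setting, retry count, and interval are assumptions for illustration only, not the logged implementation.

// Sketch: poll the apiserver /healthz endpoint until it returns HTTP 200.
// TLS verification is skipped here purely to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.39.22:8441/healthz" // endpoint taken from the log
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			fmt.Println("healthz status:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}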
	I1217 19:33:49.779839  267684 cni.go:84] Creating CNI manager for ""
	I1217 19:33:49.779849  267684 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:33:49.781831  267684 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 19:33:49.783461  267684 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 19:33:49.809372  267684 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 19:33:49.861811  267684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:33:49.866819  267684 system_pods.go:59] 7 kube-system pods found
	I1217 19:33:49.866868  267684 system_pods.go:61] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:49.866878  267684 system_pods.go:61] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:49.866884  267684 system_pods.go:61] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:49.866893  267684 system_pods.go:61] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:49.866903  267684 system_pods.go:61] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:49.866909  267684 system_pods.go:61] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:49.866913  267684 system_pods.go:61] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:49.866920  267684 system_pods.go:74] duration metric: took 5.094555ms to wait for pod list to return data ...
	I1217 19:33:49.866929  267684 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:33:49.872686  267684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 19:33:49.872705  267684 node_conditions.go:123] node cpu capacity is 2
	I1217 19:33:49.872722  267684 node_conditions.go:105] duration metric: took 5.787969ms to run NodePressure ...
	I1217 19:33:49.872783  267684 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 19:33:50.179374  267684 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 19:33:50.182281  267684 kubeadm.go:744] kubelet initialised
	I1217 19:33:50.182292  267684 kubeadm.go:745] duration metric: took 2.903208ms waiting for restarted kubelet to initialise ...
	I1217 19:33:50.182307  267684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:33:50.201867  267684 ops.go:34] apiserver oom_adj: -16
	I1217 19:33:50.201881  267684 kubeadm.go:602] duration metric: took 6.970952997s to restartPrimaryControlPlane
	I1217 19:33:50.201892  267684 kubeadm.go:403] duration metric: took 7.000554069s to StartCluster
	I1217 19:33:50.201919  267684 settings.go:142] acquiring lock: {Name:mk9bce2c5cb192383c5c2d74365fff53c608cc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:50.202011  267684 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:33:50.203049  267684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-255930/kubeconfig: {Name:mk8f63919c382cf8d5b565d23aa50d046bd25197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:33:50.203354  267684 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 19:33:50.203438  267684 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:33:50.203528  267684 addons.go:70] Setting storage-provisioner=true in profile "functional-240388"
	I1217 19:33:50.203546  267684 addons.go:239] Setting addon storage-provisioner=true in "functional-240388"
	W1217 19:33:50.203553  267684 addons.go:248] addon storage-provisioner should already be in state true
	I1217 19:33:50.203581  267684 host.go:66] Checking if "functional-240388" exists ...
	I1217 19:33:50.203581  267684 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:33:50.203574  267684 addons.go:70] Setting default-storageclass=true in profile "functional-240388"
	I1217 19:33:50.203616  267684 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-240388"
	I1217 19:33:50.206087  267684 addons.go:239] Setting addon default-storageclass=true in "functional-240388"
	W1217 19:33:50.206095  267684 addons.go:248] addon default-storageclass should already be in state true
	I1217 19:33:50.206113  267684 host.go:66] Checking if "functional-240388" exists ...
	I1217 19:33:50.207362  267684 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:33:50.207371  267684 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:33:50.209580  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.209815  267684 out.go:179] * Verifying Kubernetes components...
	I1217 19:33:50.209822  267684 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:33:50.209953  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:50.209970  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.210109  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:50.211031  267684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:33:50.211042  267684 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:33:50.211049  267684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:33:50.213015  267684 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.213326  267684 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
	I1217 19:33:50.213338  267684 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
	I1217 19:33:50.213455  267684 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
	I1217 19:33:50.474258  267684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:33:50.505661  267684 node_ready.go:35] waiting up to 6m0s for node "functional-240388" to be "Ready" ...
	I1217 19:33:50.509464  267684 node_ready.go:49] node "functional-240388" is "Ready"
	I1217 19:33:50.509481  267684 node_ready.go:38] duration metric: took 3.795113ms for node "functional-240388" to be "Ready" ...
	I1217 19:33:50.509497  267684 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:33:50.509549  267684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:33:50.534536  267684 api_server.go:72] duration metric: took 331.145216ms to wait for apiserver process to appear ...
	I1217 19:33:50.534563  267684 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:33:50.534581  267684 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8441/healthz ...
	I1217 19:33:50.548651  267684 api_server.go:279] https://192.168.39.22:8441/healthz returned 200:
	ok
	I1217 19:33:50.550644  267684 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:33:50.550659  267684 api_server.go:131] duration metric: took 16.091059ms to wait for apiserver health ...
	I1217 19:33:50.550667  267684 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:33:50.565804  267684 system_pods.go:59] 7 kube-system pods found
	I1217 19:33:50.565825  267684 system_pods.go:61] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.565830  267684 system_pods.go:61] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.565834  267684 system_pods.go:61] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:50.565838  267684 system_pods.go:61] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.565842  267684 system_pods.go:61] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:50.565846  267684 system_pods.go:61] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.565850  267684 system_pods.go:61] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:50.565855  267684 system_pods.go:74] duration metric: took 15.183886ms to wait for pod list to return data ...
	I1217 19:33:50.565862  267684 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:33:50.570724  267684 default_sa.go:45] found service account: "default"
	I1217 19:33:50.570738  267684 default_sa.go:55] duration metric: took 4.870957ms for default service account to be created ...
	I1217 19:33:50.570746  267684 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:33:50.574791  267684 system_pods.go:86] 7 kube-system pods found
	I1217 19:33:50.574808  267684 system_pods.go:89] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.574815  267684 system_pods.go:89] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.574820  267684 system_pods.go:89] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Pending
	I1217 19:33:50.574825  267684 system_pods.go:89] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.574829  267684 system_pods.go:89] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 19:33:50.574833  267684 system_pods.go:89] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.574836  267684 system_pods.go:89] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:33:50.574857  267684 retry.go:31] will retry after 274.467854ms: missing components: kube-apiserver
	I1217 19:33:50.586184  267684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:33:50.600769  267684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:33:50.854208  267684 system_pods.go:86] 7 kube-system pods found
	I1217 19:33:50.854237  267684 system_pods.go:89] "coredns-7d764666f9-p2jc7" [463dfe4a-5f2b-4d8b-969f-3288b215bcba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:33:50.854243  267684 system_pods.go:89] "etcd-functional-240388" [b25d5f2b-38a8-43f6-a9ca-650e1080eddf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 19:33:50.854253  267684 system_pods.go:89] "kube-apiserver-functional-240388" [f6453f94-5276-4e95-9449-699193d4b24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 19:33:50.854261  267684 system_pods.go:89] "kube-controller-manager-functional-240388" [0582fe42-e649-424f-8850-7fbbffcaa22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 19:33:50.854267  267684 system_pods.go:89] "kube-proxy-9b4xt" [74afb855-c8bc-4697-ae99-f445db36b930] Running
	I1217 19:33:50.854274  267684 system_pods.go:89] "kube-scheduler-functional-240388" [40e6e45c-16f3-41e6-81ea-3e8b63efbd54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 19:33:50.854278  267684 system_pods.go:89] "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running
	I1217 19:33:50.854286  267684 system_pods.go:126] duration metric: took 283.535465ms to wait for k8s-apps to be running ...
	I1217 19:33:50.854295  267684 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:33:50.854411  267684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:33:51.435751  267684 system_svc.go:56] duration metric: took 581.446ms WaitForService to wait for kubelet
	I1217 19:33:51.435770  267684 kubeadm.go:587] duration metric: took 1.23238848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:33:51.435787  267684 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:33:51.439212  267684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 19:33:51.439232  267684 node_conditions.go:123] node cpu capacity is 2
	I1217 19:33:51.439249  267684 node_conditions.go:105] duration metric: took 3.454979ms to run NodePressure ...
	I1217 19:33:51.439262  267684 start.go:242] waiting for startup goroutines ...
	I1217 19:33:51.444332  267684 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 19:33:51.445791  267684 addons.go:530] duration metric: took 1.242367945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:33:51.445830  267684 start.go:247] waiting for cluster config update ...
	I1217 19:33:51.445844  267684 start.go:256] writing updated cluster config ...
	I1217 19:33:51.446209  267684 ssh_runner.go:195] Run: rm -f paused
	I1217 19:33:51.452231  267684 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:33:51.457011  267684 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p2jc7" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:33:53.464153  267684 pod_ready.go:104] pod "coredns-7d764666f9-p2jc7" is not "Ready", error: <nil>
	W1217 19:33:55.962739  267684 pod_ready.go:104] pod "coredns-7d764666f9-p2jc7" is not "Ready", error: <nil>
	I1217 19:33:57.974056  267684 pod_ready.go:94] pod "coredns-7d764666f9-p2jc7" is "Ready"
	I1217 19:33:57.974098  267684 pod_ready.go:86] duration metric: took 6.517047688s for pod "coredns-7d764666f9-p2jc7" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:57.977362  267684 pod_ready.go:83] waiting for pod "etcd-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.983912  267684 pod_ready.go:94] pod "etcd-functional-240388" is "Ready"
	I1217 19:33:59.983929  267684 pod_ready.go:86] duration metric: took 2.00655136s for pod "etcd-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.986470  267684 pod_ready.go:83] waiting for pod "kube-apiserver-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.990980  267684 pod_ready.go:94] pod "kube-apiserver-functional-240388" is "Ready"
	I1217 19:33:59.991000  267684 pod_ready.go:86] duration metric: took 4.511047ms for pod "kube-apiserver-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.993459  267684 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:33:59.997937  267684 pod_ready.go:94] pod "kube-controller-manager-functional-240388" is "Ready"
	I1217 19:33:59.997954  267684 pod_ready.go:86] duration metric: took 4.482221ms for pod "kube-controller-manager-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.000145  267684 pod_ready.go:83] waiting for pod "kube-proxy-9b4xt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.361563  267684 pod_ready.go:94] pod "kube-proxy-9b4xt" is "Ready"
	I1217 19:34:00.361586  267684 pod_ready.go:86] duration metric: took 361.42797ms for pod "kube-proxy-9b4xt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:34:00.561139  267684 pod_ready.go:83] waiting for pod "kube-scheduler-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:34:02.566440  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:04.567079  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:06.568433  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:09.069006  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:11.568213  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:14.067100  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:16.067895  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:18.068093  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:20.568800  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:23.067669  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:25.068181  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:27.069140  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:29.567682  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:32.067944  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:34.567970  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:36.568410  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:38.568875  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:41.067285  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:43.067582  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:45.068991  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:47.568211  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:50.067798  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:52.567110  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:55.068867  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:57.567142  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:34:59.567565  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:02.067195  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:04.067302  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:06.068868  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:08.568773  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:11.067299  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:13.067770  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:15.567220  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:17.567959  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:20.067787  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:22.068724  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:24.568343  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:26.568770  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:29.067710  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:31.068145  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:33.568905  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:36.067452  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:38.567405  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:41.067223  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:43.068236  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:45.566660  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:47.568023  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:50.067628  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:52.067691  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:54.566681  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:56.567070  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:35:58.567205  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:00.567585  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:03.066911  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:05.067697  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:07.567267  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:10.070065  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:12.567282  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:15.067336  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:17.067923  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:19.068236  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:21.567784  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:23.568193  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:26.068627  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:28.568621  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:31.067696  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:33.067753  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:35.567389  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:37.568393  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:40.067868  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:42.068526  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:44.568754  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:47.066209  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:49.067317  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:51.067920  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:53.067951  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:55.568233  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:36:58.067178  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:00.568182  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:02.568983  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:05.068318  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:07.567541  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:10.068359  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:12.567929  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:15.067934  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:17.568019  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:19.568147  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:22.067853  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:24.568939  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:27.067623  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:29.068086  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:31.068905  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:33.567497  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:35.571222  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:38.068303  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:40.566716  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:42.567116  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:45.066812  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:47.069637  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	W1217 19:37:49.567275  267684 pod_ready.go:104] pod "kube-scheduler-functional-240388" is not "Ready", error: <nil>
	I1217 19:37:51.452903  267684 pod_ready.go:86] duration metric: took 3m50.891734106s for pod "kube-scheduler-functional-240388" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 19:37:51.452931  267684 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1217 19:37:51.452945  267684 pod_ready.go:40] duration metric: took 4m0.000689077s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:37:51.455139  267684 out.go:203] 
	W1217 19:37:51.456702  267684 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1217 19:37:51.457988  267684 out.go:203] 
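	The wait loop above gives up after 4m0s because the kube-scheduler pod never reports Ready. A minimal way to inspect the stuck pod directly, assuming kubectl is pointed at this profile's kubeconfig (these commands are illustrative and were not part of the test run):
	
	  kubectl -n kube-system get pod kube-scheduler-functional-240388 -o wide
	  kubectl -n kube-system describe pod kube-scheduler-functional-240388
	  kubectl -n kube-system logs kube-scheduler-functional-240388 --previous
	
	The container status and kube-scheduler sections below show what such commands would surface: the scheduler container keeps exiting shortly after each restart.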
	
	
	==> Docker <==
	Dec 17 19:33:43 functional-240388 dockerd[7924]: time="2025-12-17T19:33:43.473198163Z" level=info msg="ignoring event" container=6a5939c9502dbcd502681138b069c09eae18b9d8938bb170983b90cf6c57d283 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:33:44 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:44Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-p2jc7_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1acc72667ffecb9b1f9c1462cd1503c88f603f0996e69135f49a6c923e49ea3e\""
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"bfcb4221d4a7ef30dae2a66cd597250594cae53eecf1156b58fc897c3db4adb2\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"bc39802d69185bb3fe4387bc88803990af8a467199a407d8ff3139edd897ad31\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"de691c17fad0893ec8eda10a693c32635b6669ce118c79ef0ba02c521d246106\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"a7ce08614779dcde6425e02aa30ac8921f91945d930f48ac401c8f72dfd73c97\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"c9f3d097d04d9a9046d428a65af3812c3ab56cc0bffba2a9ad0f66a88bfc4afa\". Proceed without further sandbox information."
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/99980b89e2f75344098f5ed8a2c5d8550ce5ecb31a633fbe4a58540f6365e83f/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b52517caf11d99d0e709e8bb4270cba47fe414e0d2abdc432798bc350b43f823/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2eee46eecde457c617f877978293fdd4990ba0015ac72f57962f009d7372900e/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c56608b9e4727946ea7e1d159d57918d83a65898cb2bbe22e2de60e264d7921a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:45 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:45Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-p2jc7_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1acc72667ffecb9b1f9c1462cd1503c88f603f0996e69135f49a6c923e49ea3e\""
	Dec 17 19:33:46 functional-240388 dockerd[7924]: time="2025-12-17T19:33:46.635036806Z" level=info msg="ignoring event" container=9ddcfd03def09fa056533c5127dfcbdc2e222868d94c39bba44f3d1b1c432fdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:33:48 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a51d46f16da861c5c8c4221920bfa125a4f956a42f6e7194de3b5d72ba8aa080/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3c1a68a5a63f99b2b353b54a2893bc8ba4f54b12c5da116a989ab7f96dcc78cb/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:49 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:33:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28fbe3560cc6992430231caf18aa8e1adaa0a1b5412dbfe4e46274db26a4284a/resolv.conf as [nameserver 192.168.122.1]"
	Dec 17 19:33:58 functional-240388 dockerd[7924]: time="2025-12-17T19:33:58.156995422Z" level=info msg="ignoring event" container=df58efeab0f3eac3bed6fbf1084e04f8510bc53762ed05d42fdf793c4585f427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:34:25 functional-240388 dockerd[7924]: time="2025-12-17T19:34:25.215727670Z" level=info msg="ignoring event" container=d78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:34:54 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:34:54Z" level=error msg="error getting RW layer size for container ID '287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b': Error response from daemon: No such container: 287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b"
	Dec 17 19:34:54 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:34:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '287f41e5445c4de7ef61e7b0c9e3722323849e5b526ad5d9226062c00d21909b'"
	Dec 17 19:35:16 functional-240388 dockerd[7924]: time="2025-12-17T19:35:16.200016889Z" level=info msg="ignoring event" container=bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 17 19:35:24 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:35:24Z" level=error msg="error getting RW layer size for container ID 'd78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce': Error response from daemon: No such container: d78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce"
	Dec 17 19:35:24 functional-240388 cri-dockerd[8838]: time="2025-12-17T19:35:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd78244b593944cd7e0e4f16b9b888f48bc7129b768d4e5e0bf62b079bec06dce'"
	Dec 17 19:36:55 functional-240388 dockerd[7924]: time="2025-12-17T19:36:55.174600498Z" level=info msg="ignoring event" container=c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c389d6e9b8b25       73f80cdc073da       59 seconds ago      Exited              kube-scheduler            8                   c56608b9e4727       kube-scheduler-functional-240388            kube-system
	d3cd2a6021efd       af0321f3a4f38       4 minutes ago       Running             kube-proxy                4                   28fbe3560cc69       kube-proxy-9b4xt                            kube-system
	bbeee7ea3c766       6e38f40d628db       4 minutes ago       Running             storage-provisioner       3                   3c1a68a5a63f9       storage-provisioner                         kube-system
	720ef2e2e28ba       aa5e3ebc0dfed       4 minutes ago       Running             coredns                   3                   a51d46f16da86       coredns-7d764666f9-p2jc7                    kube-system
	060eef1436e87       5032a56602e1b       4 minutes ago       Running             kube-controller-manager   4                   2eee46eecde45       kube-controller-manager-functional-240388   kube-system
	7ed9aced06540       0a108f7189562       4 minutes ago       Running             etcd                      3                   b52517caf11d9       etcd-functional-240388                      kube-system
	5e0236591f856       58865405a13bc       4 minutes ago       Running             kube-apiserver            0                   99980b89e2f75       kube-apiserver-functional-240388            kube-system
	8512c4ee52340       af0321f3a4f38       4 minutes ago       Exited              kube-proxy                3                   cf40faa6d26bc       kube-proxy-9b4xt                            kube-system
	6a5939c9502db       5032a56602e1b       4 minutes ago       Exited              kube-controller-manager   3                   67f00add7f903       kube-controller-manager-functional-240388   kube-system
	2eee0e13328f5       aa5e3ebc0dfed       5 minutes ago       Exited              coredns                   2                   16980b72586cd       coredns-7d764666f9-p2jc7                    kube-system
	9b648a420d5f2       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       2                   9fca1633c22ad       storage-provisioner                         kube-system
	bb348be6d197a       0a108f7189562       5 minutes ago       Exited              etcd                      2                   2a94d92ddfbf2       etcd-functional-240388                      kube-system
	
	
	==> coredns [2eee0e13328f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38482 - 26281 "HINFO IN 7082336438510137172.6908441513580825570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027199885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [720ef2e2e28b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60164 - 54627 "HINFO IN 8701000521322517761.9006979715444964387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.118925994s
	
	
	==> describe nodes <==
	Name:               functional-240388
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-240388
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=functional-240388
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_31_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:31:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-240388
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:33:48 +0000   Wed, 17 Dec 2025 19:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    functional-240388
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca73804dacda4148bcecb3c8c2b68c32
	  System UUID:                ca73804d-acda-4148-bcec-b3c8c2b68c32
	  Boot ID:                    23b31f3c-6ff1-49c8-bae1-b3e21418a3ce
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2jc7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m6s
	  kube-system                 etcd-functional-240388                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m13s
	  kube-system                 kube-apiserver-functional-240388             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-functional-240388    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-9b4xt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-functional-240388             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  6m7s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	  Normal  RegisteredNode  5m9s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	  Normal  RegisteredNode  4m2s  node-controller  Node functional-240388 event: Registered Node functional-240388 in Controller
	
	
	==> dmesg <==
	[  +0.280021] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.109752] kauditd_printk_skb: 345 callbacks suppressed
	[  +0.098255] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.156272] kauditd_printk_skb: 165 callbacks suppressed
	[  +0.603143] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.025033] kauditd_printk_skb: 219 callbacks suppressed
	[Dec17 19:32] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.510593] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.045349] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.055200] kauditd_printk_skb: 400 callbacks suppressed
	[  +0.128571] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.610657] kauditd_printk_skb: 150 callbacks suppressed
	[Dec17 19:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.173189] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.324441] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.006613] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +5.161832] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.933223] kauditd_printk_skb: 410 callbacks suppressed
	[  +0.335071] kauditd_printk_skb: 228 callbacks suppressed
	[  +6.261034] kauditd_printk_skb: 2 callbacks suppressed
	[Dec17 19:34] kauditd_printk_skb: 34 callbacks suppressed
	[Dec17 19:35] kauditd_printk_skb: 6 callbacks suppressed
	[Dec17 19:36] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [7ed9aced0654] <==
	{"level":"info","ts":"2025-12-17T19:33:46.239077Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T19:33:46.239215Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T19:33:46.235081Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T19:33:46.235098Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:46.239296Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:46.245673Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T19:33:46.245695Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T19:33:46.806159Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"cde0bb267fc4e559 is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806211Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"cde0bb267fc4e559 became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806249Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cde0bb267fc4e559 received MsgPreVoteResp from cde0bb267fc4e559 at term 4"}
	{"level":"info","ts":"2025-12-17T19:33:46.806259Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cde0bb267fc4e559 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T19:33:46.806272Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"cde0bb267fc4e559 became candidate at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808150Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"cde0bb267fc4e559 received MsgVoteResp from cde0bb267fc4e559 at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808241Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"cde0bb267fc4e559 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T19:33:46.808263Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"cde0bb267fc4e559 became leader at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.808271Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: cde0bb267fc4e559 elected leader cde0bb267fc4e559 at term 5"}
	{"level":"info","ts":"2025-12-17T19:33:46.811093Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"cde0bb267fc4e559","local-member-attributes":"{Name:functional-240388 ClientURLs:[https://192.168.39.22:2379]}","cluster-id":"eaed0234649c774e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T19:33:46.811141Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:33:46.811333Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:33:46.812913Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:33:46.813647Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:33:46.814214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:33:46.814326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:33:46.815065Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T19:33:46.815938Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	
	
	==> etcd [bb348be6d197] <==
	{"level":"info","ts":"2025-12-17T19:32:39.703026Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:32:39.704425Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:32:39.704604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:32:39.705307Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:32:39.706437Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:32:39.709945Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	{"level":"info","ts":"2025-12-17T19:32:39.711194Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T19:33:29.523469Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T19:33:29.523619Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-240388","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	{"level":"error","ts":"2025-12-17T19:33:29.524018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T19:33:36.526356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T19:33:36.535569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.535641Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cde0bb267fc4e559","current-leader-member-id":"cde0bb267fc4e559"}
	{"level":"info","ts":"2025-12-17T19:33:36.535804Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T19:33:36.535839Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536005Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536165Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T19:33:36.536303Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536584Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T19:33:36.536670Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T19:33:36.536756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.22:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.540108Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"error","ts":"2025-12-17T19:33:36.540158Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.22:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T19:33:36.540212Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2025-12-17T19:33:36.540220Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-240388","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	
	
	==> kernel <==
	 19:37:53 up 6 min,  0 users,  load average: 0.23, 0.54, 0.32
	Linux functional-240388 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e0236591f85] <==
	I1217 19:33:48.193986       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:48.195040       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 19:33:48.195204       1 aggregator.go:187] initial CRD sync complete...
	I1217 19:33:48.195297       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 19:33:48.195312       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 19:33:48.195348       1 cache.go:39] Caches are synced for autoregister controller
	I1217 19:33:48.196861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 19:33:48.196978       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 19:33:48.199599       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 19:33:48.199859       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 19:33:48.199994       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 19:33:48.203065       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:48.203116       1 policy_source.go:248] refreshing policies
	E1217 19:33:48.203778       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 19:33:48.205324       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 19:33:48.285472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:33:48.708639       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 19:33:49.014823       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 19:33:50.032084       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 19:33:50.096678       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 19:33:50.147339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:33:50.159097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:33:51.583557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:33:51.633623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 19:33:51.785119       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [060eef1436e8] <==
	I1217 19:33:51.312115       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312158       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312172       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312197       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.312243       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317055       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317108       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317196       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317219       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.317261       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.320834       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.326367       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327452       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327578       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.327778       1 range_allocator.go:177] "Sending events to api server"
	I1217 19:33:51.327884       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 19:33:51.328405       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:51.328621       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.329112       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.344055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:51.367478       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.413161       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:51.413193       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 19:33:51.413198       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 19:33:51.444447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [6a5939c9502d] <==
	I1217 19:33:42.779068       1 serving.go:386] Generated self-signed cert in-memory
	I1217 19:33:42.803815       1 controllermanager.go:189] "Starting" version="v1.35.0-rc.1"
	I1217 19:33:42.805416       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:33:42.809073       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 19:33:42.809222       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 19:33:42.809956       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 19:33:42.811439       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [8512c4ee5234] <==
	I1217 19:33:42.844139       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:33:42.908637       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [d3cd2a6021ef] <==
	I1217 19:33:50.115201       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:33:50.219503       1 shared_informer.go:377] "Caches are synced"
	I1217 19:33:50.219553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.22"]
	E1217 19:33:50.219660       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:33:50.358153       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 19:33:50.358275       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 19:33:50.358324       1 server_linux.go:136] "Using iptables Proxier"
	I1217 19:33:50.370513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:33:50.370948       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 19:33:50.371342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:33:50.382699       1 config.go:200] "Starting service config controller"
	I1217 19:33:50.382951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:33:50.382986       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:33:50.383135       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:33:50.383323       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:33:50.383345       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:33:50.398739       1 config.go:309] "Starting node config controller"
	I1217 19:33:50.398767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:33:50.398774       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:33:50.484896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:33:50.484936       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:33:50.484967       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c389d6e9b8b2] <==
	I1217 19:36:55.152302       1 serving.go:386] Generated self-signed cert in-memory
	E1217 19:36:55.157564       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	
	
	==> kubelet <==
	Dec 17 19:36:29 functional-240388 kubelet[10024]: E1217 19:36:29.722751   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-240388" containerName="etcd"
	Dec 17 19:36:30 functional-240388 kubelet[10024]: E1217 19:36:30.722813   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-240388" containerName="kube-controller-manager"
	Dec 17 19:36:33 functional-240388 kubelet[10024]: E1217 19:36:33.723188   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-240388" containerName="kube-apiserver"
	Dec 17 19:36:44 functional-240388 kubelet[10024]: E1217 19:36:44.723669   10024 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p2jc7" containerName="coredns"
	Dec 17 19:36:54 functional-240388 kubelet[10024]: E1217 19:36:54.722966   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:54 functional-240388 kubelet[10024]: I1217 19:36:54.723058   10024 scope.go:122] "RemoveContainer" containerID="bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: E1217 19:36:55.227238   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: I1217 19:36:55.227265   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:55 functional-240388 kubelet[10024]: E1217 19:36:55.227456   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: I1217 19:36:56.250990   10024 scope.go:122] "RemoveContainer" containerID="bda7b0ce1a09acea20f60556583a189d78353284e5aa024fe014450268259e70"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: E1217 19:36:56.251196   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: I1217 19:36:56.251218   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:56 functional-240388 kubelet[10024]: E1217 19:36:56.252554   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: E1217 19:36:57.266982   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: I1217 19:36:57.267020   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:57 functional-240388 kubelet[10024]: E1217 19:36:57.267174   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: E1217 19:36:59.912508   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: I1217 19:36:59.912547   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:36:59 functional-240388 kubelet[10024]: E1217 19:36:59.912725   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: E1217 19:37:04.157154   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-240388" containerName="kube-scheduler"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: I1217 19:37:04.157203   10024 scope.go:122] "RemoveContainer" containerID="c389d6e9b8b25fe06ee328f3917cf450d84ad5662adb95ce9d71dd903fa1b18d"
	Dec 17 19:37:04 functional-240388 kubelet[10024]: E1217 19:37:04.157445   10024 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-functional-240388_kube-system(a7752d693a1b0ca5f7f99c49d4c4d9a3)\"" pod="kube-system/kube-scheduler-functional-240388" podUID="a7752d693a1b0ca5f7f99c49d4c4d9a3"
	Dec 17 19:37:34 functional-240388 kubelet[10024]: E1217 19:37:34.724113   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-240388" containerName="kube-apiserver"
	Dec 17 19:37:38 functional-240388 kubelet[10024]: E1217 19:37:38.723737   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-240388" containerName="kube-controller-manager"
	Dec 17 19:37:50 functional-240388 kubelet[10024]: E1217 19:37:50.723446   10024 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-240388" containerName="etcd"
	
	
	==> storage-provisioner [9b648a420d5f] <==
	W1217 19:33:03.593135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:05.598686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:05.604262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:07.608150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:07.613505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:09.616685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:09.625987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:11.628971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:11.634196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:13.638095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:13.650817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:15.653822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:15.659165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:17.662761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:17.671275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:19.675136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:19.680125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:21.685015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:21.693876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:23.697502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:23.702567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:25.706350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:25.714991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:27.719834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:33:27.724945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bbeee7ea3c76] <==
	W1217 19:37:28.460304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:30.464230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:30.469876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:32.472633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:32.481327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:34.484750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:34.490191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:36.493922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:36.502058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:38.505748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:38.510948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:40.514429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:40.522307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:42.531030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:42.538013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:44.541135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:44.550203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:46.554820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:46.561173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:48.564580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:48.569437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:50.572296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:50.581205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:52.586052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:37:52.592816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-240388 -n functional-240388
helpers_test.go:270: (dbg) Run:  kubectl --context functional-240388 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (1.57s)
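For orientation only: the post-mortem above shows kube-scheduler stuck in CrashLoopBackOff because a second scheduler instance could not bind 127.0.0.1:10259 ("address already in use"). A minimal sketch of the kind of check this subtest performs — listing pods that are not in phase Running, mirroring the kubectl command captured above — might look as follows; it is not the helpers_test.go implementation, and the context name functional-240388 and kubectl being on PATH are assumptions.

// componentcheck.go: a minimal sketch, not the actual test code. It mirrors the
// kubectl command captured in the log above and fails if any pod is not Running.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Context name is an assumption taken from the log, not hard-coded upstream.
	out, err := exec.CommandContext(ctx, "kubectl",
		"--context", "functional-240388",
		"get", "po", "-A",
		"-o", "jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n%s", err, out)
		os.Exit(1)
	}
	if names := strings.TrimSpace(string(out)); names != "" {
		fmt.Fprintf(os.Stderr, "pods not Running: %s\n", names)
		os.Exit(1)
	}
	fmt.Println("all pods are Running")
}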

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /data | grep /data": context deadline exceeded (3.482µs)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)
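The sub-microsecond duration reported here (3.482µs, and a few hundred nanoseconds for the sibling subtests below) indicates that the shared test context had already expired before minikube ssh was launched, so the mount itself was never actually inspected. A minimal sketch of the verification pattern used by iso_test.go, run under a fresh per-command timeout, could look like the following; the profile name guest-021194 and the binary path are assumptions carried over from the log, not a statement of the upstream implementation.

// mountcheck.go: a minimal sketch of the persistent-mount verification pattern,
// not the upstream iso_test.go code. It asserts each directory is an ext4 mount.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	dirs := []string{"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker"}

	for _, d := range dirs {
		// A fresh per-command timeout avoids the already-expired shared context seen above.
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "guest-021194",
			"ssh", fmt.Sprintf("df -t ext4 %s | grep %s", d, d))
		out, err := cmd.CombinedOutput()
		cancel()
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: not an ext4 persistent mount: %v\n%s", d, err, out)
			os.Exit(1)
		}
		fmt.Printf("%s: %s", d, out)
	}
}

The same loop covers /var/lib/docker, /var/lib/cni, /var/lib/kubelet, /var/lib/minikube, /var/lib/toolbox, and /var/lib/boot2docker, which fail with the identical context-deadline error in the entries that follow.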

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (315ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (278ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (269ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (394ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (296ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (299ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-021194 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "cat /version.json": context deadline exceeded (435ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-021194 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)

                                                
                                    
x
+
TestISOImage/eBPFSupport (0s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-021194 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (294ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-021194 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)
E1217 20:26:50.762668  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:52.044950  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:52.242709  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:54.606978  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
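Like the entries above, the eBPFSupport failure only reports an expired context, so the check never actually ran; the assertion text explains what it would have verified, namely that the guest kernel exposes /sys/kernel/btf/vmlinux, which CO-RE eBPF programs need and which requires building the kernel with CONFIG_DEBUG_INFO_BTF=y. A minimal sketch of that presence check, intended to run inside the guest and not taken from the upstream test, is:

// btfcheck.go: a minimal sketch of the BTF presence check referenced by iso_test.go
// above, meant to run inside the guest VM; it is not the upstream test code.
package main

import (
	"fmt"
	"os"
)

func main() {
	const vmlinuxBTF = "/sys/kernel/btf/vmlinux"
	if info, err := os.Stat(vmlinuxBTF); err == nil && !info.IsDir() {
		fmt.Println("OK: kernel exposes BTF; CO-RE eBPF programs can resolve types")
		return
	}
	// Without this file the kernel was most likely built without CONFIG_DEBUG_INFO_BTF=y.
	fmt.Fprintf(os.Stderr, "NOT FOUND: %s is missing; rebuild the ISO kernel with CONFIG_DEBUG_INFO_BTF=y\n", vmlinuxBTF)
	os.Exit(1)
}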

                                                
                                    

Test pass (396/452)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 22.29
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.3/json-events 8.42
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.43
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.35.0-rc.1/json-events 9.52
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 1.47
31 TestOffline 79.26
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 203.39
38 TestAddons/serial/Volcano 43.41
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 9.54
44 TestAddons/parallel/Registry 16.19
45 TestAddons/parallel/RegistryCreds 0.61
46 TestAddons/parallel/Ingress 18.87
47 TestAddons/parallel/InspektorGadget 12.12
48 TestAddons/parallel/MetricsServer 6.67
50 TestAddons/parallel/CSI 54.57
51 TestAddons/parallel/Headlamp 20.78
52 TestAddons/parallel/CloudSpanner 5.79
53 TestAddons/parallel/LocalPath 56.74
54 TestAddons/parallel/NvidiaDevicePlugin 5.38
55 TestAddons/parallel/Yakd 12.05
57 TestAddons/StoppedEnableDisable 14.43
58 TestCertOptions 46.54
59 TestCertExpiration 309.28
60 TestDockerFlags 62.37
61 TestForceSystemdFlag 89.05
62 TestForceSystemdEnv 110.97
67 TestErrorSpam/setup 40.32
68 TestErrorSpam/start 0.38
69 TestErrorSpam/status 0.72
70 TestErrorSpam/pause 1.29
71 TestErrorSpam/unpause 1.51
72 TestErrorSpam/stop 5.86
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 56.78
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 61.91
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 4.05
84 TestFunctional/serial/CacheCmd/cache/add_local 1.99
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 53.85
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.05
95 TestFunctional/serial/LogsFileCmd 1.05
96 TestFunctional/serial/InvalidService 3.96
98 TestFunctional/parallel/ConfigCmd 0.49
99 TestFunctional/parallel/DashboardCmd 13.84
100 TestFunctional/parallel/DryRun 0.32
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 0.95
106 TestFunctional/parallel/ServiceCmdConnect 12.49
107 TestFunctional/parallel/AddonsCmd 0.19
108 TestFunctional/parallel/PersistentVolumeClaim 35.86
110 TestFunctional/parallel/SSHCmd 0.41
111 TestFunctional/parallel/CpCmd 1.27
112 TestFunctional/parallel/MySQL 43.13
113 TestFunctional/parallel/FileSync 0.22
114 TestFunctional/parallel/CertSync 1.28
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
122 TestFunctional/parallel/License 1.01
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.47
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
126 TestFunctional/parallel/MountCmd/any-port 8.38
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/specific-port 1.61
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.24
140 TestFunctional/parallel/DockerEnv/bash 0.81
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
145 TestFunctional/parallel/ImageCommands/ImageBuild 4.76
146 TestFunctional/parallel/ImageCommands/Setup 1.87
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.56
153 TestFunctional/parallel/ServiceCmd/DeployApp 33.22
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
158 TestFunctional/parallel/ServiceCmd/List 1.23
159 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
161 TestFunctional/parallel/ServiceCmd/Format 0.33
162 TestFunctional/parallel/ServiceCmd/URL 0.26
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 51.37
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 58.07
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.11
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.99
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.93
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.19
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.62
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.94
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.91
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.36
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.46
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 16.78
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.28
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.14
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.08
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 29.52
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.17
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 42.32
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.39
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.22
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 38.14
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.23
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.22
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.08
214 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.18
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 1.04
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.28
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.25
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.25
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.2
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 4.39
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.88
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash 0.79
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.09
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.1
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.09
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.27
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.77
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.53
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.47
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.47
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 1.13
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.77
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 24.49
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.48
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.44
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.47
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.43
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.34
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.43
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.39
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.43
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 8.3
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.27
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.29
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
260 TestGvisorAddon 218.32
263 TestMultiControlPlane/serial/StartCluster 210.77
264 TestMultiControlPlane/serial/DeployApp 6.49
265 TestMultiControlPlane/serial/PingHostFromPods 1.44
266 TestMultiControlPlane/serial/AddWorkerNode 48.62
267 TestMultiControlPlane/serial/NodeLabels 0.07
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
269 TestMultiControlPlane/serial/CopyFile 11.32
270 TestMultiControlPlane/serial/StopSecondaryNode 12.46
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
272 TestMultiControlPlane/serial/RestartSecondaryNode 25.8
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 168.15
275 TestMultiControlPlane/serial/DeleteSecondaryNode 6.44
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
277 TestMultiControlPlane/serial/StopCluster 38.28
278 TestMultiControlPlane/serial/RestartCluster 113.49
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
280 TestMultiControlPlane/serial/AddSecondaryNode 90.85
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.71
284 TestImageBuild/serial/Setup 38.51
285 TestImageBuild/serial/NormalBuild 1.56
286 TestImageBuild/serial/BuildWithBuildArg 1.02
287 TestImageBuild/serial/BuildWithDockerIgnore 0.72
288 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.91
293 TestJSONOutput/start/Command 55.78
294 TestJSONOutput/start/Audit 0
296 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/pause/Command 0.66
300 TestJSONOutput/pause/Audit 0
302 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/unpause/Command 0.6
306 TestJSONOutput/unpause/Audit 0
308 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
311 TestJSONOutput/stop/Command 14.76
312 TestJSONOutput/stop/Audit 0
314 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
315 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
316 TestErrorJSONOutput 0.26
321 TestMainNoArgs 0.06
322 TestMinikubeProfile 87.3
325 TestMountStart/serial/StartWithMountFirst 20.56
326 TestMountStart/serial/VerifyMountFirst 0.31
327 TestMountStart/serial/StartWithMountSecond 21.44
328 TestMountStart/serial/VerifyMountSecond 0.31
329 TestMountStart/serial/DeleteFirst 0.74
330 TestMountStart/serial/VerifyMountPostDelete 0.32
331 TestMountStart/serial/Stop 1.29
332 TestMountStart/serial/RestartStopped 21.03
333 TestMountStart/serial/VerifyMountPostStop 0.32
336 TestMultiNode/serial/FreshStart2Nodes 110.62
337 TestMultiNode/serial/DeployApp2Nodes 5.53
338 TestMultiNode/serial/PingHostFrom2Pods 0.94
339 TestMultiNode/serial/AddNode 50.32
340 TestMultiNode/serial/MultiNodeLabels 0.07
341 TestMultiNode/serial/ProfileList 0.49
342 TestMultiNode/serial/CopyFile 6.21
343 TestMultiNode/serial/StopNode 2.42
344 TestMultiNode/serial/StartAfterStop 45.03
345 TestMultiNode/serial/RestartKeepsNodes 165.57
346 TestMultiNode/serial/DeleteNode 2.22
347 TestMultiNode/serial/StopMultiNode 25.26
348 TestMultiNode/serial/RestartMultiNode 85.25
349 TestMultiNode/serial/ValidateNameConflict 46.46
354 TestPreload 129.56
356 TestScheduledStopUnix 115.13
357 TestSkaffold 125.2
360 TestRunningBinaryUpgrade 413.28
362 TestKubernetesUpgrade 226.99
372 TestISOImage/Setup 22.4
377 TestISOImage/Binaries/crictl 0.19
378 TestISOImage/Binaries/curl 0.19
379 TestISOImage/Binaries/docker 0.18
380 TestISOImage/Binaries/git 0.21
381 TestISOImage/Binaries/iptables 0.17
382 TestISOImage/Binaries/podman 0.17
383 TestISOImage/Binaries/rsync 0.18
384 TestISOImage/Binaries/socat 0.19
385 TestISOImage/Binaries/wget 0.19
386 TestISOImage/Binaries/VBoxControl 0.18
387 TestISOImage/Binaries/VBoxService 0.19
388 TestStoppedBinaryUpgrade/Setup 3.25
389 TestStoppedBinaryUpgrade/Upgrade 164.98
391 TestPause/serial/Start 93.68
392 TestStoppedBinaryUpgrade/MinikubeLogs 2.34
400 TestPause/serial/SecondStartNoReconfiguration 87.95
401 TestPause/serial/Pause 1.11
402 TestPause/serial/VerifyStatus 0.25
403 TestPause/serial/Unpause 0.69
404 TestPause/serial/PauseAgain 0.97
405 TestPause/serial/DeletePaused 1.13
406 TestPause/serial/VerifyDeletedResources 13.57
408 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
409 TestNoKubernetes/serial/StartWithK8s 48.53
410 TestNetworkPlugins/group/auto/Start 80.11
411 TestNoKubernetes/serial/StartWithStopK8s 15.94
412 TestNoKubernetes/serial/Start 24.48
413 TestNetworkPlugins/group/kindnet/Start 86.37
414 TestNetworkPlugins/group/auto/KubeletFlags 0.18
415 TestNetworkPlugins/group/auto/NetCatPod 11.24
416 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
417 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
418 TestNoKubernetes/serial/ProfileList 1.54
419 TestNoKubernetes/serial/Stop 1.37
420 TestNoKubernetes/serial/StartNoArgs 33.23
421 TestNetworkPlugins/group/auto/DNS 0.17
422 TestNetworkPlugins/group/auto/Localhost 0.15
423 TestNetworkPlugins/group/auto/HairPin 0.15
424 TestNetworkPlugins/group/calico/Start 113.47
425 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
426 TestNetworkPlugins/group/custom-flannel/Start 88.94
427 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
428 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
429 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
430 TestNetworkPlugins/group/kindnet/DNS 0.25
431 TestNetworkPlugins/group/kindnet/Localhost 0.19
432 TestNetworkPlugins/group/kindnet/HairPin 0.26
433 TestNetworkPlugins/group/false/Start 64.73
434 TestNetworkPlugins/group/enable-default-cni/Start 90.51
435 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
436 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
437 TestNetworkPlugins/group/calico/ControllerPod 6.01
438 TestNetworkPlugins/group/custom-flannel/DNS 0.35
439 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
440 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
441 TestNetworkPlugins/group/calico/KubeletFlags 0.2
442 TestNetworkPlugins/group/calico/NetCatPod 11.3
443 TestNetworkPlugins/group/calico/DNS 0.24
444 TestNetworkPlugins/group/calico/Localhost 0.2
445 TestNetworkPlugins/group/calico/HairPin 0.21
446 TestNetworkPlugins/group/flannel/Start 68.27
447 TestNetworkPlugins/group/false/KubeletFlags 0.23
448 TestNetworkPlugins/group/false/NetCatPod 12.36
449 TestNetworkPlugins/group/bridge/Start 73.57
450 TestNetworkPlugins/group/false/DNS 0.23
451 TestNetworkPlugins/group/false/Localhost 0.18
452 TestNetworkPlugins/group/false/HairPin 0.17
453 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
454 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
455 TestNetworkPlugins/group/kubenet/Start 101.91
456 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
457 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
458 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
460 TestStartStop/group/old-k8s-version/serial/FirstStart 86.01
461 TestNetworkPlugins/group/flannel/ControllerPod 6.01
462 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
463 TestNetworkPlugins/group/flannel/NetCatPod 14.29
464 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
465 TestNetworkPlugins/group/bridge/NetCatPod 10.42
466 TestNetworkPlugins/group/flannel/DNS 0.18
467 TestNetworkPlugins/group/flannel/Localhost 0.16
468 TestNetworkPlugins/group/flannel/HairPin 0.18
469 TestNetworkPlugins/group/bridge/DNS 0.23
470 TestNetworkPlugins/group/bridge/Localhost 0.18
471 TestNetworkPlugins/group/bridge/HairPin 0.17
473 TestStartStop/group/no-preload/serial/FirstStart 71.3
475 TestStartStop/group/embed-certs/serial/FirstStart 88.68
476 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
477 TestNetworkPlugins/group/kubenet/NetCatPod 12.31
478 TestStartStop/group/old-k8s-version/serial/DeployApp 10.48
479 TestNetworkPlugins/group/kubenet/DNS 0.2
480 TestNetworkPlugins/group/kubenet/Localhost 0.18
481 TestNetworkPlugins/group/kubenet/HairPin 0.17
482 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.72
483 TestStartStop/group/old-k8s-version/serial/Stop 13.55
485 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.05
486 TestStartStop/group/no-preload/serial/DeployApp 9.37
487 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
488 TestStartStop/group/old-k8s-version/serial/SecondStart 67.77
489 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
490 TestStartStop/group/no-preload/serial/Stop 14.63
491 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
492 TestStartStop/group/no-preload/serial/SecondStart 52.26
493 TestStartStop/group/embed-certs/serial/DeployApp 9.45
494 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
495 TestStartStop/group/embed-certs/serial/Stop 13.27
496 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
497 TestStartStop/group/embed-certs/serial/SecondStart 55.38
498 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.42
499 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.48
500 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
501 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
502 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
503 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
504 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.78
505 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
506 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
507 TestStartStop/group/old-k8s-version/serial/Pause 3.03
508 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
510 TestStartStop/group/newest-cni/serial/FirstStart 57.45
511 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
512 TestStartStop/group/no-preload/serial/Pause 2.74
523 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
524 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
525 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
526 TestStartStop/group/embed-certs/serial/Pause 3.11
527 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
528 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
529 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
530 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
531 TestStartStop/group/newest-cni/serial/DeployApp 0
532 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.8
533 TestStartStop/group/newest-cni/serial/Stop 6.85
534 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
535 TestStartStop/group/newest-cni/serial/SecondStart 29.86
536 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
537 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
538 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
539 TestStartStop/group/newest-cni/serial/Pause 2.62
x
+
TestDownloadOnly/v1.28.0/json-events (22.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-689281 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-689281 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (22.290340625s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 19:20:07.437772  259985 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1217 19:20:07.437881  259985 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-689281
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-689281: exit status 85 (77.630656ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-689281 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-689281 │ jenkins │ v1.37.0 │ 17 Dec 25 19:19 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:19:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:19:45.204222  259997 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:19:45.204364  259997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:19:45.204376  259997 out.go:374] Setting ErrFile to fd 2...
	I1217 19:19:45.204382  259997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:19:45.204623  259997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	W1217 19:19:45.204758  259997 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22186-255930/.minikube/config/config.json: open /home/jenkins/minikube-integration/22186-255930/.minikube/config/config.json: no such file or directory
	I1217 19:19:45.205273  259997 out.go:368] Setting JSON to true
	I1217 19:19:45.206401  259997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3729,"bootTime":1765995456,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:19:45.206466  259997 start.go:143] virtualization: kvm guest
	I1217 19:19:45.214382  259997 out.go:99] [download-only-689281] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 19:19:45.214624  259997 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 19:19:45.214682  259997 notify.go:221] Checking for updates...
	I1217 19:19:45.219256  259997 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:19:45.227464  259997 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:19:45.232818  259997 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:19:45.234355  259997 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:19:45.235578  259997 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:19:45.238016  259997 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:19:45.238412  259997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:19:45.794659  259997 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:19:45.794762  259997 start.go:309] selected driver: kvm2
	I1217 19:19:45.794770  259997 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:19:45.795116  259997 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:19:45.795617  259997 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:19:45.796360  259997 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:19:45.796408  259997 cni.go:84] Creating CNI manager for ""
	I1217 19:19:45.796460  259997 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:19:45.796472  259997 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:19:45.796523  259997 start.go:353] cluster config:
	{Name:download-only-689281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-689281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:19:45.796727  259997 iso.go:125] acquiring lock: {Name:mkeac5b890dbb93d0e36dd357fe6f0cc980f247e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:19:45.798628  259997 out.go:99] Downloading VM boot image ...
	I1217 19:19:45.798676  259997 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22186-255930/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1217 19:19:55.992654  259997 out.go:99] Starting "download-only-689281" primary control-plane node in "download-only-689281" cluster
	I1217 19:19:55.992698  259997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 19:19:56.080692  259997 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1217 19:19:56.080731  259997 cache.go:65] Caching tarball of preloaded images
	I1217 19:19:56.086431  259997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 19:19:56.087887  259997 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 19:19:56.087911  259997 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 19:19:56.184212  259997 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1217 19:19:56.184341  259997 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-689281 host does not exist
	  To start a cluster, run: "minikube start -p download-only-689281"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-689281
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (8.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-404391 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-404391 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2 : (8.41551805s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (8.42s)
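
Note: the json-events runs above only prime minikube's caches (ISO, preload tarball, kube binaries) and never boot a VM, which is why the later LogsDuration steps report that no host exists. A minimal sketch of the same cache-priming step outside the harness, using only flags shown in the log (the profile name here is illustrative):

    # download artifacts for one Kubernetes version without creating a VM
    out/minikube-linux-amd64 start -p download-only-demo --download-only --force \
        --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2
    # remove the throwaway profile; the cached artifacts stay under the minikube home directory
    out/minikube-linux-amd64 delete -p download-only-demo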

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 19:20:16.259461  259985 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
I1217 19:20:16.259524  259985 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-404391
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-404391: exit status 85 (75.562085ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-689281 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-689281 │ jenkins │ v1.37.0 │ 17 Dec 25 19:19 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-689281                                                                                                                         │ download-only-689281 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-404391 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2 │ download-only-404391 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:07.899704  260244 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:07.900003  260244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:07.900013  260244 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:07.900018  260244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:07.900293  260244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:20:07.900851  260244 out.go:368] Setting JSON to true
	I1217 19:20:07.901783  260244 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3752,"bootTime":1765995456,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:07.901848  260244 start.go:143] virtualization: kvm guest
	I1217 19:20:07.903568  260244 out.go:99] [download-only-404391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:20:07.903752  260244 notify.go:221] Checking for updates...
	I1217 19:20:07.904878  260244 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:20:07.907036  260244 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:07.908500  260244 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:20:07.913280  260244 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:20:07.914733  260244 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:20:07.917355  260244 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:20:07.917661  260244 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:07.950577  260244 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:20:07.950631  260244 start.go:309] selected driver: kvm2
	I1217 19:20:07.950644  260244 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:07.951100  260244 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:07.951862  260244 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:20:07.952072  260244 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:20:07.952125  260244 cni.go:84] Creating CNI manager for ""
	I1217 19:20:07.952200  260244 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:20:07.952213  260244 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:07.952274  260244 start.go:353] cluster config:
	{Name:download-only-404391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-404391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:20:07.952421  260244 iso.go:125] acquiring lock: {Name:mkeac5b890dbb93d0e36dd357fe6f0cc980f247e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:07.953773  260244 out.go:99] Starting "download-only-404391" primary control-plane node in "download-only-404391" cluster
	I1217 19:20:07.953812  260244 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1217 19:20:08.569495  260244 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1217 19:20:08.569572  260244 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:08.570709  260244 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1217 19:20:08.572253  260244 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 19:20:08.572277  260244 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 19:20:08.669392  260244 preload.go:295] Got checksum from GCS API "2968966fc29eb8b579cb5fae535bf3b1"
	I1217 19:20:08.669448  260244 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2968966fc29eb8b579cb5fae535bf3b1 -> /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-404391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-404391"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.43s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-404391
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (9.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-260327 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-260327 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2 : (9.515398707s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (9.52s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 19:20:26.660101  259985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
I1217 19:20:26.660146  259985 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-260327
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-260327: exit status 85 (78.039361ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-689281 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2      │ download-only-689281 │ jenkins │ v1.37.0 │ 17 Dec 25 19:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-689281                                                                                                                              │ download-only-689281 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-404391 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=kvm2      │ download-only-404391 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ delete  │ -p download-only-404391                                                                                                                              │ download-only-404391 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
	│ start   │ -o=json --download-only -p download-only-260327 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=kvm2 │ download-only-260327 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:20:17
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:20:17.201842  260439 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:20:17.201940  260439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:17.201948  260439 out.go:374] Setting ErrFile to fd 2...
	I1217 19:20:17.201953  260439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:20:17.202150  260439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:20:17.202665  260439 out.go:368] Setting JSON to true
	I1217 19:20:17.203462  260439 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3761,"bootTime":1765995456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:20:17.203524  260439 start.go:143] virtualization: kvm guest
	I1217 19:20:17.220200  260439 out.go:99] [download-only-260327] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:20:17.220478  260439 notify.go:221] Checking for updates...
	I1217 19:20:17.287492  260439 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:20:17.289062  260439 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:20:17.290422  260439 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:20:17.291733  260439 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:20:17.292958  260439 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:20:17.295151  260439 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:20:17.295465  260439 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:20:17.329900  260439 out.go:99] Using the kvm2 driver based on user configuration
	I1217 19:20:17.329939  260439 start.go:309] selected driver: kvm2
	I1217 19:20:17.329946  260439 start.go:927] validating driver "kvm2" against <nil>
	I1217 19:20:17.330274  260439 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:20:17.330785  260439 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 19:20:17.330932  260439 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:20:17.330960  260439 cni.go:84] Creating CNI manager for ""
	I1217 19:20:17.331014  260439 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 19:20:17.331026  260439 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 19:20:17.331077  260439 start.go:353] cluster config:
	{Name:download-only-260327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-260327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:20:17.331167  260439 iso.go:125] acquiring lock: {Name:mkeac5b890dbb93d0e36dd357fe6f0cc980f247e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:20:17.332665  260439 out.go:99] Starting "download-only-260327" primary control-plane node in "download-only-260327" cluster
	I1217 19:20:17.332697  260439 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:20:17.871852  260439 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1217 19:20:17.871898  260439 cache.go:65] Caching tarball of preloaded images
	I1217 19:20:17.872793  260439 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1217 19:20:17.874426  260439 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 19:20:17.874461  260439 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 19:20:17.972674  260439 preload.go:295] Got checksum from GCS API "69672a26de652c41c080c5ec079f9718"
	I1217 19:20:17.972724  260439 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4?checksum=md5:69672a26de652c41c080c5ec079f9718 -> /home/jenkins/minikube-integration/22186-255930/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-260327 host does not exist
	  To start a cluster, run: "minikube start -p download-only-260327"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-260327
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (1.47s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 19:20:27.527097  259985 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-777374 --alsologtostderr --binary-mirror http://127.0.0.1:36177 --driver=kvm2 
aaa_download_only_test.go:309: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-777374 --alsologtostderr --binary-mirror http://127.0.0.1:36177 --driver=kvm2 : (1.149132477s)
helpers_test.go:176: Cleaning up "binary-mirror-777374" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-777374
--- PASS: TestBinaryMirror (1.47s)
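
Note: TestBinaryMirror exercises the --binary-mirror flag, which redirects Kubernetes binary downloads to a caller-supplied HTTP endpoint instead of the default dl.k8s.io URLs (one of which is visible in the log line above). A minimal sketch of the same pattern, assuming an HTTP server that mirrors the dl.k8s.io layout is already listening on the address taken from the log (the profile name is illustrative):

    # point binary downloads at a local mirror; requires a server on 127.0.0.1:36177
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
        --binary-mirror http://127.0.0.1:36177 --driver=kvm2
    out/minikube-linux-amd64 delete -p binary-mirror-demo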

                                                
                                    
TestOffline (79.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-762873 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-762873 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m18.004534048s)
helpers_test.go:176: Cleaning up "offline-docker-762873" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-762873
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-762873: (1.250901751s)
--- PASS: TestOffline (79.26s)
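
Note: a single suite such as TestOffline can be rerun from a minikube checkout with Go's test runner. A minimal sketch, assuming the CLI has already been built at the path shown in the environment above; the MINIKUBE_BIN setting simply mirrors that environment and is an assumption about how the harness locates the binary, and additional harness flags may be required, so treat this as a starting point rather than the exact CI invocation:

    # assumption: environment mirrors the CI log (MINIKUBE_BIN, MINIKUBE_HOME, KUBECONFIG)
    MINIKUBE_BIN=out/minikube-linux-amd64 go test -v -run TestOffline -timeout 30m ./test/integration/...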

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-743931
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-743931: exit status 85 (69.640727ms)

                                                
                                                
-- stdout --
	* Profile "addons-743931" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-743931"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-743931
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-743931: exit status 85 (68.878379ms)

                                                
                                                
-- stdout --
	* Profile "addons-743931" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-743931"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
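
Note: both PreSetup checks rely on "addons enable"/"addons disable" exiting with status 85 when the named profile does not exist, as the Non-zero exit lines above show. A minimal sketch of checking that behaviour by hand (the profile name is illustrative):

    # expect exit status 85 and the "Profile ... not found" hint for a missing profile
    out/minikube-linux-amd64 addons enable dashboard -p no-such-profile
    [ $? -eq 85 ] && echo "got the expected exit code for a missing profile"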

                                                
                                    
TestAddons/Setup (203.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-743931 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-743931 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m23.389442455s)
--- PASS: TestAddons/Setup (203.39s)
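
Note: the Setup run enables every addon the later serial and parallel tests depend on in a single start. The same invocation, reformatted with line continuations purely for readability (flags and values unchanged from the log line above):

    out/minikube-linux-amd64 start -p addons-743931 --wait=true --memory=4096 --alsologtostderr \
        --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots \
        --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
        --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
        --driver=kvm2 --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher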

                                                
                                    
TestAddons/serial/Volcano (43.41s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 32.253923ms
addons_test.go:886: volcano-controller stabilized in 32.341084ms
addons_test.go:878: volcano-admission stabilized in 32.774202ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-w95ln" [e076e6f2-e478-4c1e-a417-77eabdf1e392] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004447139s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-j9l9s" [1d467321-c4cd-445b-ab5d-0cda001f0275] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00435169s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-jmnds" [72d0c70d-82c6-4165-b97c-a97e76cd0538] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004200951s
addons_test.go:905: (dbg) Run:  kubectl --context addons-743931 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-743931 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-743931 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [df9ed97e-c275-4d48-a14f-2a5ae11fa1c3] Pending
helpers_test.go:353: "test-job-nginx-0" [df9ed97e-c275-4d48-a14f-2a5ae11fa1c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [df9ed97e-c275-4d48-a14f-2a5ae11fa1c3] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.010650961s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable volcano --alsologtostderr -v=1: (11.933469446s)
--- PASS: TestAddons/serial/Volcano (43.41s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-743931 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-743931 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-743931 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-743931 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [12b8c4ad-f55c-492d-ba55-e6059ea85ae1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [12b8c4ad-f55c-492d-ba55-e6059ea85ae1] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004395256s
addons_test.go:696: (dbg) Run:  kubectl --context addons-743931 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-743931 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-743931 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)
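
Note: the FakeCredentials flow checks that the gcp-auth addon injected Google credential environment variables into a freshly created pod. The two printenv probes from the log can be combined into a single exec (context and pod name as in this report):

    kubectl --context addons-743931 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT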

                                                
                                    
TestAddons/parallel/Registry (16.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.597089ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-w28xx" [f97f398c-ccbe-4bfc-a822-721b580d85e0] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007409618s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-n6m8q" [486574a7-0b37-4f6f-8823-315abded49b4] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004096805s
addons_test.go:394: (dbg) Run:  kubectl --context addons-743931 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-743931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-743931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.297738415s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 ip
2025/12/17 19:25:10 [DEBUG] GET http://192.168.39.29:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.19s)
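
Note: the Registry check probes the in-cluster registry service by DNS name from a short-lived busybox pod, confirming both cluster DNS resolution and HTTP reachability of registry.kube-system.svc.cluster.local. The probe the test runs, reformatted for readability:

    kubectl --context addons-743931 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"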

                                                
                                    
TestAddons/parallel/RegistryCreds (0.61s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 10.912518ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-743931
addons_test.go:334: (dbg) Run:  kubectl --context addons-743931 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.61s)

                                                
                                    
TestAddons/parallel/Ingress (18.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-743931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-743931 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-743931 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [c3a35c6a-1fdc-4602-bec8-068ad001409a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [c3a35c6a-1fdc-4602-bec8-068ad001409a] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004367017s
I1217 19:25:26.810308  259985 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-743931 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.29
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable ingress-dns --alsologtostderr -v=1: (1.963499387s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable ingress --alsologtostderr -v=1: (7.796219611s)
--- PASS: TestAddons/parallel/Ingress (18.87s)
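
Note: the Ingress check curls the node from inside the VM with an explicit Host header so the request matches the ingress rule the test created from testdata/nginx-ingress-v1.yaml. The same probe can be repeated by hand with the command from the log:

    out/minikube-linux-amd64 -p addons-743931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"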

                                                
                                    
TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-lp79b" [6b56bbed-fd6b-4a8f-83e8-64915cb7cbb2] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004662554s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable inspektor-gadget --alsologtostderr -v=1: (6.118309413s)
--- PASS: TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 9.156185ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-59qtl" [78573a7b-f8d9-45b8-a8af-f9608ffeeac0] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003647834s
addons_test.go:465: (dbg) Run:  kubectl --context addons-743931 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.67s)

                                                
                                    
TestAddons/parallel/CSI (54.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 19:25:01.289187  259985 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 19:25:01.294690  259985 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 19:25:01.294728  259985 kapi.go:107] duration metric: took 5.554971ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.572119ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-743931 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-743931 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [b24194e9-ae65-4e38-b1d5-b8b89e9603a4] Pending
helpers_test.go:353: "task-pv-pod" [b24194e9-ae65-4e38-b1d5-b8b89e9603a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [b24194e9-ae65-4e38-b1d5-b8b89e9603a4] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005602206s
addons_test.go:574: (dbg) Run:  kubectl --context addons-743931 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-743931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-743931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-743931 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-743931 delete pod task-pv-pod: (1.412024065s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-743931 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-743931 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-743931 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [25fd1e3c-8bca-46b2-be46-9639c8e854c7] Pending
helpers_test.go:353: "task-pv-pod-restore" [25fd1e3c-8bca-46b2-be46-9639c8e854c7] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.005050883s
addons_test.go:616: (dbg) Run:  kubectl --context addons-743931 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-743931 delete pod task-pv-pod-restore: (1.212432937s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-743931 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-743931 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.82472747s)
--- PASS: TestAddons/parallel/CSI (54.57s)
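
Note: the repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the test helper polling the claim's phase until it is satisfied (typically until the claim reports Bound). A minimal sketch of the same wait written as a shell loop (context, claim name, and timeout are illustrative):

    # poll the PVC phase every 5s, give up after about 6 minutes
    for i in $(seq 1 72); do
        phase=$(kubectl --context addons-743931 get pvc hpvc -n default -o jsonpath='{.status.phase}')
        [ "$phase" = "Bound" ] && break
        sleep 5
    done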

                                                
                                    
TestAddons/parallel/Headlamp (20.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-743931 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-zbvrc" [6fa44b66-82e7-4de4-8202-5f4465062cb4] Pending
helpers_test.go:353: "headlamp-dfcdc64b-zbvrc" [6fa44b66-82e7-4de4-8202-5f4465062cb4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-zbvrc" [6fa44b66-82e7-4de4-8202-5f4465062cb4] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-zbvrc" [6fa44b66-82e7-4de4-8202-5f4465062cb4] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.07464661s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable headlamp --alsologtostderr -v=1: (5.763344549s)
--- PASS: TestAddons/parallel/Headlamp (20.78s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-tc8zv" [83c8e1f0-5da1-4a7d-85e6-dbd6725519ed] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005597568s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.79s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-743931 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-743931 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [809ec3f7-a615-4f09-9226-de1dc3e5a75f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [809ec3f7-a615-4f09-9226-de1dc3e5a75f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [809ec3f7-a615-4f09-9226-de1dc3e5a75f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.005170834s
addons_test.go:969: (dbg) Run:  kubectl --context addons-743931 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 ssh "cat /opt/local-path-provisioner/pvc-31fdec46-7cdf-41a8-b018-37e4a2b9ae12_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-743931 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-743931 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.858724591s)
--- PASS: TestAddons/parallel/LocalPath (56.74s)
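A hand-run equivalent of the local-path flow above, assuming manifests matching testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml; the host path contains the bound PV's generated name, so pvc-<uid> below is a placeholder:

    kubectl --context addons-743931 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-743931 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-743931 get pvc test-pvc -o jsonpath={.status.phase}    # poll until Bound
    # read back the file the pod wrote on the node, then clean up and disable the provisioner
    out/minikube-linux-amd64 -p addons-743931 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"
    kubectl --context addons-743931 delete pod test-local-path
    kubectl --context addons-743931 delete pvc test-pvc
    out/minikube-linux-amd64 -p addons-743931 addons disable storage-provisioner-rancher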

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-l8mq6" [67dad672-8854-4759-804d-f1123109f5c9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00516168s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.38s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-tkbnd" [bd6d568c-6df4-4425-8a61-53b76874fb36] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006313625s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-743931 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-743931 addons disable yakd --alsologtostderr -v=1: (6.041768584s)
--- PASS: TestAddons/parallel/Yakd (12.05s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (14.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-743931
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-743931: (14.212450208s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-743931
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-743931
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-743931
--- PASS: TestAddons/StoppedEnableDisable (14.43s)

                                                
                                    
x
+
TestCertOptions (46.54s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-192361 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-192361 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (44.461052755s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-192361 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-192361 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-192361 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-192361" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-192361
E1217 20:18:07.644266  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-192361: (1.66137775s)
--- PASS: TestCertOptions (46.54s)
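The same check can be run interactively: start a profile with extra apiserver SANs and a non-default port, then confirm they show up in the serving certificate and the kubeconfig. A sketch using the flags from the run above (the profile name is arbitrary):

    out/minikube-linux-amd64 start -p cert-options-192361 --memory=3072 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2
    # the SANs should appear in the certificate, and the custom port in the kubeconfig / admin.conf
    out/minikube-linux-amd64 -p cert-options-192361 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    kubectl --context cert-options-192361 config view
    out/minikube-linux-amd64 ssh -p cert-options-192361 -- "sudo cat /etc/kubernetes/admin.conf"
    out/minikube-linux-amd64 delete -p cert-options-192361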

                                                
                                    
x
+
TestCertExpiration (309.28s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1217 20:16:12.461973  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=3m --driver=kvm2 : (47.12393193s)
E1217 20:17:13.906380  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=8760h --driver=kvm2 
E1217 20:20:10.407840  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m20.888103569s)
helpers_test.go:176: Cleaning up "cert-expiration-868391" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-868391
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-868391: (1.269142323s)
--- PASS: TestCertExpiration (309.28s)
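The expiration check amounts to creating a cluster with very short-lived certificates, letting them lapse, and restarting with a long validity; the sleep below is an assumption standing in for the roughly three minutes the test waits between the two starts:

    out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=3m --driver=kvm2
    sleep 180    # let the 3-minute certificates expire
    out/minikube-linux-amd64 start -p cert-expiration-868391 --memory=3072 --cert-expiration=8760h --driver=kvm2
    out/minikube-linux-amd64 delete -p cert-expiration-868391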

                                                
                                    
x
+
TestDockerFlags (62.37s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-899089 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-899089 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m1.050944774s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-899089 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-899089 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-899089" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-899089
--- PASS: TestDockerFlags (62.37s)
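To confirm by hand that --docker-env and --docker-opt reach the Docker daemon, the same two systemctl queries can be run against a freshly started profile; a sketch with the flags from the run above:

    out/minikube-linux-amd64 start -p docker-flags-899089 --memory=3072 \
      --docker-env=FOO=BAR --docker-env=BAZ=BAT \
      --docker-opt=debug --docker-opt=icc=true \
      --install-addons=false --wait=false --driver=kvm2
    # FOO/BAZ should show up under Environment, the --docker-opt values under ExecStart
    out/minikube-linux-amd64 -p docker-flags-899089 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-899089 ssh "sudo systemctl show docker --property=ExecStart --no-pager"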

                                                
                                    
x
+
TestForceSystemdFlag (89.05s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-878200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-878200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m27.790708575s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-878200 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-878200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-878200
--- PASS: TestForceSystemdFlag (89.05s)
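The cgroup-driver assertion is a one-liner once the profile is up; a sketch (the expected output is systemd when --force-systemd is set):

    out/minikube-linux-amd64 start -p force-systemd-flag-878200 --memory=3072 --force-systemd --driver=kvm2
    out/minikube-linux-amd64 -p force-systemd-flag-878200 ssh "docker info --format {{.CgroupDriver}}"    # expect: systemd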

                                                
                                    
x
+
TestForceSystemdEnv (110.97s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-556043 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-556043 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m49.66800621s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-556043 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-556043" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-556043
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-556043: (1.040106595s)
--- PASS: TestForceSystemdEnv (110.97s)

                                                
                                    
x
+
TestErrorSpam/setup (40.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-255780 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-255780 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-255780 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-255780 --driver=kvm2 : (40.320293907s)
--- PASS: TestErrorSpam/setup (40.32s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.29s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 pause
--- PASS: TestErrorSpam/pause (1.29s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
x
+
TestErrorSpam/stop (5.86s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop: (2.686825478s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop: (1.658598184s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-255780 --log_dir /tmp/nospam-255780 stop: (1.518715337s)
--- PASS: TestErrorSpam/stop (5.86s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-750489 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (56.780552571s)
--- PASS: TestFunctional/serial/StartWithProxy (56.78s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (61.91s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 19:27:58.539496  259985 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --alsologtostderr -v=8
E1217 19:28:53.567973  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.574407  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.585699  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.607566  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.649087  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.730650  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:53.892282  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:54.214641  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:54.856998  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:56.138634  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:28:58.700901  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-750489 --alsologtostderr -v=8: (1m1.906973354s)
functional_test.go:678: soft start took 1m1.907606535s for "functional-750489" cluster.
I1217 19:29:00.446824  259985 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (61.91s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-750489 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:3.1: (1.318339107s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:3.3: (1.39926097s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:latest
E1217 19:29:03.822823  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 cache add registry.k8s.io/pause:latest: (1.329863783s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-750489 /tmp/TestFunctionalserialCacheCmdcacheadd_local1678515961/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache add minikube-local-cache-test:functional-750489
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 cache add minikube-local-cache-test:functional-750489: (1.614828136s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache delete minikube-local-cache-test:functional-750489
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-750489
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)
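Caching a locally built image works the same way as the remote registry.k8s.io/pause images above; a sketch, where <build-context> stands for any directory containing a Dockerfile:

    docker build -t minikube-local-cache-test:functional-750489 <build-context>
    out/minikube-linux-amd64 -p functional-750489 cache add minikube-local-cache-test:functional-750489
    out/minikube-linux-amd64 -p functional-750489 cache delete minikube-local-cache-test:functional-750489
    docker rmi minikube-local-cache-test:functional-750489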

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.164546ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 cache reload: (1.059215933s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
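The reload check deletes an image inside the node and restores it from minikube's cache; run by hand it looks like this (crictl inspecti exits non-zero while the image is absent):

    out/minikube-linux-amd64 -p functional-750489 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: image gone
    out/minikube-linux-amd64 -p functional-750489 cache reload
    out/minikube-linux-amd64 -p functional-750489 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again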

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 kubectl -- --context functional-750489 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-750489 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (53.85s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 19:29:14.064343  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:29:34.546464  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-750489 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.845694047s)
functional_test.go:776: restart took 53.845833823s for "functional-750489" cluster.
I1217 19:30:02.816362  259985 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (53.85s)
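Passing component flags through --extra-config and restarting against the existing profile is all this step does before the control plane is re-checked; a sketch:

    out/minikube-linux-amd64 start -p functional-750489 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-750489 get po -l tier=control-plane -n kube-system    # all components should be Running/Ready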

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-750489 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 logs: (1.053966539s)
--- PASS: TestFunctional/serial/LogsCmd (1.05s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 logs --file /tmp/TestFunctionalserialLogsFileCmd1203778994/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 logs --file /tmp/TestFunctionalserialLogsFileCmd1203778994/001/logs.txt: (1.05317017s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.05s)
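The logs command prints to stdout by default and writes to a file when --file is given; a sketch:

    out/minikube-linux-amd64 -p functional-750489 logs
    out/minikube-linux-amd64 -p functional-750489 logs --file /tmp/logs.txt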

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-750489 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-750489
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-750489: exit status 115 (338.272864ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.209:30765 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-750489 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
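The negative case can be reproduced with any Service whose selector matches no running pod; minikube service then exits with status 115 (SVC_UNREACHABLE). A sketch assuming a manifest equivalent to testdata/invalidsvc.yaml:

    kubectl --context functional-750489 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-750489    # exit 115: SVC_UNREACHABLE
    kubectl --context functional-750489 delete -f testdata/invalidsvc.yaml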

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 config get cpus: exit status 14 (82.133983ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 config get cpus: exit status 14 (79.017336ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
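The exit-14 cases above are simply `config get` on a key that is not set; the full round trip looks like this:

    out/minikube-linux-amd64 -p functional-750489 config get cpus      # exit 14: key not found
    out/minikube-linux-amd64 -p functional-750489 config set cpus 2
    out/minikube-linux-amd64 -p functional-750489 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-750489 config unset cpus
    out/minikube-linux-amd64 -p functional-750489 config get cpus      # exit 14 again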

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-750489 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-750489 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 265490: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.84s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-750489 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (163.901681ms)

                                                
                                                
-- stdout --
	* [functional-750489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:30:09.691045  265276 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:30:09.691337  265276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:30:09.691347  265276 out.go:374] Setting ErrFile to fd 2...
	I1217 19:30:09.691354  265276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:30:09.691689  265276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:30:09.692199  265276 out.go:368] Setting JSON to false
	I1217 19:30:09.693473  265276 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4354,"bootTime":1765995456,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:30:09.693555  265276 start.go:143] virtualization: kvm guest
	I1217 19:30:09.696844  265276 out.go:179] * [functional-750489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:30:09.698751  265276 notify.go:221] Checking for updates...
	I1217 19:30:09.699227  265276 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:30:09.701009  265276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:30:09.702971  265276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:30:09.704111  265276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:30:09.705224  265276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:30:09.707436  265276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:30:09.709356  265276 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 19:30:09.710108  265276 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:30:09.758138  265276 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:30:09.759471  265276 start.go:309] selected driver: kvm2
	I1217 19:30:09.759491  265276 start.go:927] validating driver "kvm2" against &{Name:functional-750489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-750489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:30:09.759634  265276 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:30:09.762792  265276 out.go:203] 
	W1217 19:30:09.764208  265276 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:30:09.765363  265276 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.32s)
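A dry-run start validates flags against the existing profile without touching the VM, so the undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while a plain dry run succeeds; a sketch:

    out/minikube-linux-amd64 start -p functional-750489 --dry-run --memory 250MB --alsologtostderr --driver=kvm2    # exit 23
    out/minikube-linux-amd64 start -p functional-750489 --dry-run --alsologtostderr -v=1 --driver=kvm2              # exit 0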

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-750489 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-750489 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (159.731427ms)

                                                
                                                
-- stdout --
	* [functional-750489] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:30:09.534434  265251 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:30:09.534586  265251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:30:09.534610  265251 out.go:374] Setting ErrFile to fd 2...
	I1217 19:30:09.534617  265251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:30:09.535094  265251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:30:09.535804  265251 out.go:368] Setting JSON to false
	I1217 19:30:09.537047  265251 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4354,"bootTime":1765995456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:30:09.537126  265251 start.go:143] virtualization: kvm guest
	I1217 19:30:09.540928  265251 out.go:179] * [functional-750489] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:30:09.542786  265251 notify.go:221] Checking for updates...
	I1217 19:30:09.542826  265251 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:30:09.544019  265251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:30:09.545836  265251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:30:09.547146  265251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:30:09.548720  265251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:30:09.550181  265251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:30:09.552210  265251 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 19:30:09.553050  265251 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:30:09.594733  265251 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 19:30:09.596231  265251 start.go:309] selected driver: kvm2
	I1217 19:30:09.596249  265251 start.go:927] validating driver "kvm2" against &{Name:functional-750489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-750489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:30:09.596392  265251 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:30:09.598558  265251 out.go:203] 
	W1217 19:30:09.599823  265251 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:30:09.601156  265251 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
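
The status variants exercised above can be rerun by hand against the same profile; a minimal sketch (assuming the built binary is on PATH as minikube and the functional-750489 profile is still up):

minikube -p functional-750489 status
minikube -p functional-750489 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-750489 status -o json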

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-750489 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-750489 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-mbkfz" [aa68260b-1873-4683-9774-aa2c5084abee] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-mbkfz" [aa68260b-1873-4683-9774-aa2c5084abee] Running
I1217 19:30:16.719424  259985 retry.go:31] will retry after 1.833655186s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:87a31efc-6c67-41d5-9149-b3cbcadd053f ResourceVersion:781 Generation:0 CreationTimestamp:2025-12-17 19:30:16 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0015eea90 VolumeMode:0xc0015eeaa0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007694744s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.209:30167
functional_test.go:1680: http://192.168.39.209:30167: success! body:
Request served by hello-node-connect-7d85dfc575-mbkfz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.209:30167
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.49s)
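
The same connectivity check can be repeated manually with the commands logged above; a rough sketch (context and profile name as in this run):

kubectl --context functional-750489 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-750489 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(minikube -p functional-750489 service hello-node-connect --url)   # e.g. http://192.168.39.209:30167
curl "$URL"   # the echo server replies with the request it received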

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (35.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0afcec07-061e-4f2e-83e3-565cd048e2d5] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00485584s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-750489 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-750489 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-750489 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-750489 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-750489 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f790308c-8603-421d-8d01-5a51c6f26f18] Pending
helpers_test.go:353: "sp-pod" [f790308c-8603-421d-8d01-5a51c6f26f18] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f790308c-8603-421d-8d01-5a51c6f26f18] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004779046s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-750489 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-750489 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-750489 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:30:38.062862  259985 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [10bfc84b-cd97-4624-859c-049aeb15858e] Pending
helpers_test.go:353: "sp-pod" [10bfc84b-cd97-4624-859c-049aeb15858e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [10bfc84b-cd97-4624-859c-049aeb15858e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00587332s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-750489 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.86s)
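
The claim being bound here matches the spec dumped in the retry message above (myclaim, ReadWriteOnce, 500Mi, Filesystem volume mode); an equivalent manifest applied by hand would look roughly like:

kubectl --context functional-750489 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-750489 get pvc myclaim -o json   # poll until status.phase is "Bound"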

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh -n functional-750489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cp functional-750489:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2127169104/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh -n functional-750489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh -n functional-750489 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)
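
The copy round-trip can be reproduced with minikube cp plus ssh; a small sketch (the return path is illustrative):

minikube -p functional-750489 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-750489 ssh -n functional-750489 "sudo cat /home/docker/cp-test.txt"
minikube -p functional-750489 cp functional-750489:/home/docker/cp-test.txt ./cp-test-back.txt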

                                                
                                    
TestFunctional/parallel/MySQL (43.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-750489 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-nrmt7" [62edd3b9-ddab-4abc-81b1-f5295cf87d87] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-nrmt7" [62edd3b9-ddab-4abc-81b1-f5295cf87d87] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.005577196s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;": exit status 1 (209.769686ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:30:57.464455  259985 retry.go:31] will retry after 1.357488324s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;": exit status 1 (199.27322ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:30:59.022260  259985 retry.go:31] will retry after 1.350558228s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;": exit status 1 (219.609249ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:31:00.593015  259985 retry.go:31] will retry after 2.00640297s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;": exit status 1 (139.963875ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:31:02.740736  259985 retry.go:31] will retry after 4.251440869s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (43.13s)
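
The retries above just poll the pod until mysqld accepts the root password; a bash loop equivalent to what the test does (pod name taken from this run, password from testdata/mysql.yaml):

until kubectl --context functional-750489 exec mysql-6bcdcbc558-nrmt7 -- \
      mysql -ppassword -e "show databases;"; do
  sleep 2   # ERROR 2002/1045 while mysqld is still initializing; clears once the server is ready
done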

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/259985/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /etc/test/nested/copy/259985/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
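
This test relies on minikube's file sync: files placed under $MINIKUBE_HOME/files/<path> on the host are copied to <path> inside the VM when the profile starts. A hand-run sketch of the same check (path mirrors this run; 259985 is the test runner's PID):

mkdir -p ~/.minikube/files/etc/test/nested/copy/259985
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/259985/hosts
# takes effect on the next start of the profile
minikube -p functional-750489 ssh "sudo cat /etc/test/nested/copy/259985/hosts"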

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/259985.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /etc/ssl/certs/259985.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/259985.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /usr/share/ca-certificates/259985.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2599852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /etc/ssl/certs/2599852.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2599852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /usr/share/ca-certificates/2599852.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-750489 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh "sudo systemctl is-active crio": exit status 1 (198.456013ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)
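
Exit status 3 from systemctl is-active means the unit is inactive, which is exactly what the test wants for the non-selected runtime (crio on a docker-runtime cluster). To eyeball all runtimes at once, roughly:

minikube -p functional-750489 ssh "sudo systemctl is-active docker crio containerd"
# on this configuration docker should report active and crio inactive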

                                                
                                    
TestFunctional/parallel/License (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-amd64 license: (1.011215776s)
--- PASS: TestFunctional/parallel/License (1.01s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdany-port4068524362/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765999808959147383" to /tmp/TestFunctionalparallelMountCmdany-port4068524362/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765999808959147383" to /tmp/TestFunctionalparallelMountCmdany-port4068524362/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765999808959147383" to /tmp/TestFunctionalparallelMountCmdany-port4068524362/001/test-1765999808959147383
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.459161ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:30:09.175010  259985 retry.go:31] will retry after 501.651371ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:30 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:30 test-1765999808959147383
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh cat /mount-9p/test-1765999808959147383
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-750489 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [877500e7-298e-4f65-8c8c-f60515373dc2] Pending
helpers_test.go:353: "busybox-mount" [877500e7-298e-4f65-8c8c-f60515373dc2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [877500e7-298e-4f65-8c8c-f60515373dc2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1217 19:30:15.508479  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox-mount" [877500e7-298e-4f65-8c8c-f60515373dc2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.048509101s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-750489 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdany-port4068524362/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.38s)
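
The 9p mount flow driven by this test can be run manually; a minimal sketch (host directory is illustrative; as in the log above, the first findmnt may need a retry while the mount comes up):

minikube mount -p functional-750489 /tmp/hostdir:/mount-9p &      # mount keeps running until killed
minikube -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-750489 ssh "ls -la /mount-9p"
minikube mount -p functional-750489 --kill=true                   # kill mount processes for the profile, as VerifyCleanup does below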

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.882448ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "86.567609ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "361.260904ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "73.912613ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
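
The listing flavors timed above differ in how much per-profile state they load; --light skips validating cluster status, which is why it returns in ~74ms here versus ~361ms for the full JSON listing. For reference:

minikube profile list
minikube profile list -o json
minikube profile list -o json --light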

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdspecific-port2455072383/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.698124ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:30:17.527693  259985 retry.go:31] will retry after 648.791066ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdspecific-port2455072383/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "sudo umount -f /mount-9p"
I1217 19:30:18.767129  259985 detect.go:223] nested VM detected
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh "sudo umount -f /mount-9p": exit status 1 (193.531982ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-750489 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdspecific-port2455072383/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T" /mount1: exit status 1 (224.457669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:30:19.175861  259985 retry.go:31] will retry after 374.602853ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-750489 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-750489 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3738107138/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-750489 docker-env) && out/minikube-linux-amd64 status -p functional-750489"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-750489 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.81s)
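
docker-env points the host docker CLI at the daemon inside the VM; the bash flow tested above, roughly:

eval "$(minikube -p functional-750489 docker-env)"            # exports DOCKER_HOST etc. for this shell
docker images                                                 # now lists images from inside the minikube VM
eval "$(minikube -p functional-750489 docker-env --unset)"    # revert the shell when done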

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750489 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-750489
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-750489
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750489 image ls --format short --alsologtostderr:
I1217 19:30:30.475976  266329 out.go:360] Setting OutFile to fd 1 ...
I1217 19:30:30.476108  266329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:30.476118  266329 out.go:374] Setting ErrFile to fd 2...
I1217 19:30:30.476122  266329 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:30.476339  266329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:30:30.476933  266329 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:30.477027  266329 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:30.479160  266329 ssh_runner.go:195] Run: systemctl --version
I1217 19:30:30.481376  266329 main.go:143] libmachine: domain functional-750489 has defined MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:30.481795  266329 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:74:2c", ip: ""} in network mk-functional-750489: {Iface:virbr1 ExpiryTime:2025-12-17 20:27:16 +0000 UTC Type:0 Mac:52:54:00:45:74:2c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-750489 Clientid:01:52:54:00:45:74:2c}
I1217 19:30:30.481819  266329 main.go:143] libmachine: domain functional-750489 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:30.481988  266329 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-750489/id_rsa Username:docker}
I1217 19:30:30.571795  266329 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
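
This and the following ImageList subtests run the same command and only vary the output format; for reference:

minikube -p functional-750489 image ls --format short
minikube -p functional-750489 image ls --format table
minikube -p functional-750489 image ls --format json
minikube -p functional-750489 image ls --format yaml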

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750489 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ localhost/my-image                          │ functional-750489 │ 658560b65094e │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-750489 │ 5b8d74f1aec09 │ 30B    │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.3           │ aa27095f56193 │ 88MB   │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-750489 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3           │ 5826b25d990d7 │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3           │ 36eef8e07bdd6 │ 71.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3           │ aec12dadf56dd │ 52.8MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750489 image ls --format table --alsologtostderr:
I1217 19:30:35.819806  266411 out.go:360] Setting OutFile to fd 1 ...
I1217 19:30:35.820147  266411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:35.820161  266411 out.go:374] Setting ErrFile to fd 2...
I1217 19:30:35.820169  266411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:35.820500  266411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:30:35.821465  266411 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:35.821652  266411 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:35.824501  266411 ssh_runner.go:195] Run: systemctl --version
I1217 19:30:35.827098  266411 main.go:143] libmachine: domain functional-750489 has defined MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:35.827685  266411 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:74:2c", ip: ""} in network mk-functional-750489: {Iface:virbr1 ExpiryTime:2025-12-17 20:27:16 +0000 UTC Type:0 Mac:52:54:00:45:74:2c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-750489 Clientid:01:52:54:00:45:74:2c}
I1217 19:30:35.827725  266411 main.go:143] libmachine: domain functional-750489 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:35.827902  266411 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-750489/id_rsa Username:docker}
I1217 19:30:35.915514  266411 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750489 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-750489","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"88000000"},{"
id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"71900000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"5b8d74f1aec09b0d1db233b75847565d3c4b36b288e0335030106f8935c4f9b4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-750489"],"size":"30"},{"id":"5826b25d990d7d314d236c8d128f43e443583891
f5cdffa7bf8bca50ae9e0942","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"74900000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"658560b65094e11ca1407fc5248cf471529af1486433aba5703e1945171c7fe1","repoDigests":[],"repoTags":["localhost/my-image:functional-750489"],"size":"1240000"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"52800000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750489 image ls --format json --alsologtostderr:
I1217 19:30:35.620679  266400 out.go:360] Setting OutFile to fd 1 ...
I1217 19:30:35.620977  266400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:35.620988  266400 out.go:374] Setting ErrFile to fd 2...
I1217 19:30:35.620992  266400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:35.621230  266400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:30:35.622069  266400 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:35.622181  266400 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:35.624629  266400 ssh_runner.go:195] Run: systemctl --version
I1217 19:30:35.627347  266400 main.go:143] libmachine: domain functional-750489 has defined MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:35.627864  266400 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:74:2c", ip: ""} in network mk-functional-750489: {Iface:virbr1 ExpiryTime:2025-12-17 20:27:16 +0000 UTC Type:0 Mac:52:54:00:45:74:2c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-750489 Clientid:01:52:54:00:45:74:2c}
I1217 19:30:35.627909  266400 main.go:143] libmachine: domain functional-750489 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:35.628090  266400 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-750489/id_rsa Username:docker}
I1217 19:30:35.717106  266400 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-750489 image ls --format yaml --alsologtostderr:
- id: 5b8d74f1aec09b0d1db233b75847565d3c4b36b288e0335030106f8935c4f9b4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-750489
size: "30"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "52800000"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "74900000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "88000000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-750489
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750489 image ls --format yaml --alsologtostderr:
I1217 19:30:30.668014  266340 out.go:360] Setting OutFile to fd 1 ...
I1217 19:30:30.668326  266340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:30.668338  266340 out.go:374] Setting ErrFile to fd 2...
I1217 19:30:30.668342  266340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:30.668610  266340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:30:30.669215  266340 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:30.669327  266340 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:30.671884  266340 ssh_runner.go:195] Run: systemctl --version
I1217 19:30:30.674452  266340 main.go:143] libmachine: domain functional-750489 has defined MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:30.675048  266340 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:74:2c", ip: ""} in network mk-functional-750489: {Iface:virbr1 ExpiryTime:2025-12-17 20:27:16 +0000 UTC Type:0 Mac:52:54:00:45:74:2c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-750489 Clientid:01:52:54:00:45:74:2c}
I1217 19:30:30.675080  266340 main.go:143] libmachine: domain functional-750489 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:30.675302  266340 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-750489/id_rsa Username:docker}
I1217 19:30:30.765628  266340 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-750489 ssh pgrep buildkitd: exit status 1 (164.670458ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image build -t localhost/my-image:functional-750489 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 image build -t localhost/my-image:functional-750489 testdata/build --alsologtostderr: (4.371835107s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-750489 image build -t localhost/my-image:functional-750489 testdata/build --alsologtostderr:
I1217 19:30:31.023242  266362 out.go:360] Setting OutFile to fd 1 ...
I1217 19:30:31.023395  266362 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:31.023404  266362 out.go:374] Setting ErrFile to fd 2...
I1217 19:30:31.023408  266362 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:30:31.023586  266362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:30:31.024161  266362 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:31.024890  266362 config.go:182] Loaded profile config "functional-750489": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 19:30:31.027200  266362 ssh_runner.go:195] Run: systemctl --version
I1217 19:30:31.029338  266362 main.go:143] libmachine: domain functional-750489 has defined MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:31.029755  266362 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:74:2c", ip: ""} in network mk-functional-750489: {Iface:virbr1 ExpiryTime:2025-12-17 20:27:16 +0000 UTC Type:0 Mac:52:54:00:45:74:2c Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-750489 Clientid:01:52:54:00:45:74:2c}
I1217 19:30:31.029790  266362 main.go:143] libmachine: domain functional-750489 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:74:2c in network mk-functional-750489
I1217 19:30:31.029955  266362 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-750489/id_rsa Username:docker}
I1217 19:30:31.116250  266362 build_images.go:162] Building image from path: /tmp/build.3096348981.tar
I1217 19:30:31.116331  266362 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:30:31.129723  266362 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3096348981.tar
I1217 19:30:31.135753  266362 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3096348981.tar: stat -c "%s %y" /var/lib/minikube/build/build.3096348981.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3096348981.tar': No such file or directory
I1217 19:30:31.135812  266362 ssh_runner.go:362] scp /tmp/build.3096348981.tar --> /var/lib/minikube/build/build.3096348981.tar (3072 bytes)
I1217 19:30:31.171339  266362 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3096348981
I1217 19:30:31.184984  266362 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3096348981 -xf /var/lib/minikube/build/build.3096348981.tar
I1217 19:30:31.196328  266362 docker.go:361] Building image: /var/lib/minikube/build/build.3096348981
I1217 19:30:31.196416  266362 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-750489 /var/lib/minikube/build/build.3096348981
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B 0.1s done
#3 DONE 0.1s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 DONE 0.1s

                                                
                                                
#5 [internal] load build context
#5 transferring context: 62B 0.1s done
#5 DONE 0.2s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 1.0s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:658560b65094e11ca1407fc5248cf471529af1486433aba5703e1945171c7fe1 done
#8 naming to localhost/my-image:functional-750489 done
#8 DONE 0.1s
I1217 19:30:35.282712  266362 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-750489 /var/lib/minikube/build/build.3096348981: (4.086241545s)
I1217 19:30:35.282811  266362 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3096348981
I1217 19:30:35.309066  266362 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3096348981.tar
I1217 19:30:35.326188  266362 build_images.go:218] Built localhost/my-image:functional-750489 from /tmp/build.3096348981.tar
I1217 19:30:35.326266  266362 build_images.go:134] succeeded building to: functional-750489
I1217 19:30:35.326274  266362 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.76s)
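For reference, BuildKit steps #1, #4, #6 and #7 in the build log above imply that the testdata/build context holds a small content.txt file and a three-instruction Dockerfile. A minimal sketch consistent with that log (reconstructed from the logged steps, not copied from the repository) would be:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /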

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.846600096s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-750489
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image load --daemon kicbase/echo-server:functional-750489 --alsologtostderr
2025/12/17 19:30:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image load --daemon kicbase/echo-server:functional-750489 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-750489
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image load --daemon kicbase/echo-server:functional-750489 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (33.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-750489 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-750489 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-qdcvn" [c14e27fd-258d-41a7-88c4-7a47bc45b79d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-qdcvn" [c14e27fd-258d-41a7-88c4-7a47bc45b79d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 33.008372468s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (33.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image save kicbase/echo-server:functional-750489 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image rm kicbase/echo-server:functional-750489 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-750489
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 image save --daemon kicbase/echo-server:functional-750489 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-750489
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 service list: (1.232989632s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-750489 service list -o json: (1.241033929s)
functional_test.go:1504: Took "1.241156543s" to run "out/minikube-linux-amd64 -p functional-750489 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.209:32390
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-750489 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.209:32390
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-750489
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-750489
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-750489
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-255930/.minikube/files/etc/test/nested/copy/259985/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (51.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
E1217 19:31:37.431799  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-240388 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: (51.372552749s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (51.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (58.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 19:31:59.646491  259985 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-240388 --alsologtostderr -v=8: (58.071551687s)
functional_test.go:678: soft start took 58.071901058s for "functional-240388" cluster.
I1217 19:32:57.718413  259985 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (58.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-240388 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:3.1: (1.33290954s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:3.3: (1.382859394s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 cache add registry.k8s.io/pause:latest: (1.274252615s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC795983841/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache add minikube-local-cache-test:functional-240388
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 cache add minikube-local-cache-test:functional-240388: (1.618445749s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache delete minikube-local-cache-test:functional-240388
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.440705ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 cache reload: (1.066766742s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 kubectl -- --context functional-240388 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-240388 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4146955492/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-240388 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-240388
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-240388: exit status 115 (240.885054ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.22:32044 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-240388 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 config get cpus: exit status 14 (65.980525ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 config get cpus: exit status 14 (71.885616ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (16.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-240388 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-240388 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 269994: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (16.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-240388 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: exit status 23 (138.087382ms)

                                                
                                                
-- stdout --
	* [functional-240388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:38:35.995220  269923 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:38:35.995542  269923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:35.995556  269923 out.go:374] Setting ErrFile to fd 2...
	I1217 19:38:35.995564  269923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:35.995908  269923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:38:35.996638  269923 out.go:368] Setting JSON to false
	I1217 19:38:35.997947  269923 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4860,"bootTime":1765995456,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:38:35.998041  269923 start.go:143] virtualization: kvm guest
	I1217 19:38:35.999468  269923 out.go:179] * [functional-240388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:38:36.000870  269923 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:38:36.000880  269923 notify.go:221] Checking for updates...
	I1217 19:38:36.004064  269923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:38:36.005742  269923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:38:36.007095  269923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:38:36.008559  269923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:38:36.009996  269923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:38:36.012197  269923 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:38:36.013000  269923 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:38:36.048903  269923 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 19:38:36.050268  269923 start.go:309] selected driver: kvm2
	I1217 19:38:36.050291  269923 start.go:927] validating driver "kvm2" against &{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:38:36.050394  269923 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:38:36.052488  269923 out.go:203] 
	W1217 19:38:36.053944  269923 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:38:36.055045  269923 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --dry-run --alsologtostderr -v=1 --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-240388 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-240388 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: exit status 23 (142.971173ms)

                                                
                                                
-- stdout --
	* [functional-240388] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:38:35.849164  269907 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:38:35.849267  269907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:35.849275  269907 out.go:374] Setting ErrFile to fd 2...
	I1217 19:38:35.849281  269907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:35.849638  269907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:38:35.850102  269907 out.go:368] Setting JSON to false
	I1217 19:38:35.851006  269907 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4860,"bootTime":1765995456,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:38:35.851076  269907 start.go:143] virtualization: kvm guest
	I1217 19:38:35.853013  269907 out.go:179] * [functional-240388] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:38:35.854460  269907 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:38:35.854486  269907 notify.go:221] Checking for updates...
	I1217 19:38:35.857396  269907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:38:35.859205  269907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	I1217 19:38:35.860776  269907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	I1217 19:38:35.862035  269907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:38:35.863404  269907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:38:35.865402  269907 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1217 19:38:35.866264  269907 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:38:35.903920  269907 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 19:38:35.906013  269907 start.go:309] selected driver: kvm2
	I1217 19:38:35.906041  269907 start.go:927] validating driver "kvm2" against &{Name:functional-240388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-240388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:38:35.906244  269907 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:38:35.908950  269907 out.go:203] 
	W1217 19:38:35.910998  269907 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:38:35.912466  269907 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (29.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-240388 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-240388 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-wwpcs" [e578f59c-137e-4155-8d23-a8a628cda69b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-wwpcs" [e578f59c-137e-4155-8d23-a8a628cda69b] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 29.004236816s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.22:32249
functional_test.go:1680: http://192.168.39.22:32249: success! body:
Request served by hello-node-connect-9f67c86d4-wwpcs

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.22:32249
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (29.52s)
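
The sequence above is: create the deployment, expose it as a NodePort service, resolve the URL with minikube service --url, then fetch it. A rough Go sketch of the same flow follows; the deployment name, image, and profile are taken from this run, pod readiness waiting is omitted, and this is not the test's implementation.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and aborts on failure, returning trimmed combined output.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Create and expose the echo-server deployment, as the test does.
	run("kubectl", "--context", "functional-240388", "create", "deployment",
		"hello-node-connect", "--image", "kicbase/echo-server")
	run("kubectl", "--context", "functional-240388", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")

	// Resolve the NodePort URL through minikube and issue one GET against it.
	url := run("out/minikube-linux-amd64", "-p", "functional-240388",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}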

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (42.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [377236c5-a7a8-4bb5-834d-3140d3393035] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.438430046s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-240388 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-240388 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-240388 get pvc myclaim -o=json
I1217 19:38:08.742826  259985 retry.go:31] will retry after 2.894309777s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:5fd17218-6626-4efd-8e27-9334b16c0f47 ResourceVersion:978 Generation:0 CreationTimestamp:2025-12-17 19:38:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00178a000 VolumeMode:0xc00178a010 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-240388 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-240388 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:38:11.835486  259985 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6a4e5352-6abb-45c5-81e8-c77056182c3c] Pending
helpers_test.go:353: "sp-pod" [6a4e5352-6abb-45c5-81e8-c77056182c3c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6a4e5352-6abb-45c5-81e8-c77056182c3c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.006303794s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-240388 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-240388 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-240388 delete -f testdata/storage-provisioner/pod.yaml: (1.665123819s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-240388 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:38:38.886968  259985 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d37fde8b-53ae-4f7e-8705-e3202a8811d1] Pending
helpers_test.go:353: "sp-pod" [d37fde8b-53ae-4f7e-8705-e3202a8811d1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.049396026s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-240388 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (42.32s)
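
The retry at 19:38:08 waits for the claim to move from Pending to Bound before the pod is applied. A small sketch of that polling step, assuming the functional-240388 context and the myclaim name from this run, and using a kubectl jsonpath query; the poll interval and deadline are arbitrary.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase reads the status.phase of a claim via kubectl.
func pvcPhase(ctx, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const ctx, claim = "functional-240388", "myclaim"
	deadline := time.Now().Add(2 * time.Minute)
	for {
		phase, err := pvcPhase(ctx, claim)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pvc %s phase = %q\n", claim, phase)
		if phase == "Bound" {
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("pvc %s never reached Bound", claim)
		}
		time.Sleep(3 * time.Second)
	}
}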

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh -n functional-240388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cp functional-240388:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm4284289030/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh -n functional-240388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh -n functional-240388 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.22s)
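
Each cp above is verified by reading the file back over ssh with sudo cat. A compact sketch of one such copy-and-verify round trip under the same profile and paths; illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs the minikube binary used in this report and fails fast on errors.
func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file into the guest, then read it back to confirm the contents.
	const profile = "functional-240388"
	mk("-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt"))
}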

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (38.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-240388 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-lqxdh" [4db88b4e-1050-4625-9333-180428d1c24a] Pending
helpers_test.go:353: "mysql-7d7b65bc95-lqxdh" [4db88b4e-1050-4625-9333-180428d1c24a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-lqxdh" [4db88b4e-1050-4625-9333-180428d1c24a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 29.005624051s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;": exit status 1 (209.050204ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:38:31.301507  259985 retry.go:31] will retry after 1.132414379s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;": exit status 1 (238.522727ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:38:32.673746  259985 retry.go:31] will retry after 2.065835571s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;": exit status 1 (291.923175ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:38:35.032133  259985 retry.go:31] will retry after 2.63124912s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;": exit status 1 (200.896468ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:38:37.865492  259985 retry.go:31] will retry after 1.95842474s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-240388 exec mysql-7d7b65bc95-lqxdh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (38.14s)
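
The ERROR 2002 and ERROR 1045 retries above occur because kubectl exec starts succeeding before mysqld inside the pod is ready to accept the root password. A bounded retry sketch around the same query, with the pod name taken from this run and a backoff schedule chosen arbitrarily; not the retry helper used by the test itself.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-240388", "exec", "mysql-7d7b65bc95-lqxdh",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	backoff := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// Socket and auth errors both surface as a non-zero exit; back off and retry.
		log.Printf("attempt %d failed: %v; retrying in %s", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	log.Fatal("mysql never became ready")
}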

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/259985/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /etc/test/nested/copy/259985/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/259985.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /etc/ssl/certs/259985.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/259985.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /usr/share/ca-certificates/259985.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2599852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /etc/ssl/certs/2599852.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2599852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /usr/share/ca-certificates/2599852.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-240388 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)
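
The go-template in the command above iterates the label map of the first node and prints each key. The same query driven from Go, assuming the functional-240388 context from this run; sketch only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same template as the test: print every label key on the first node.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-240388",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}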

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh "sudo systemctl is-active crio": exit status 1 (184.237529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.18s)
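
The exit status 1 above is the expected outcome: systemctl is-active returns non-zero for an inactive unit, so with docker as the active runtime the crio check is supposed to "fail". A sketch that runs the same check and reads the exit code that way; binary path and profile are from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-240388",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("crio state: %s", out)
	if err != nil {
		// A non-zero exit means crio is not the active runtime, which is what the test expects here.
		fmt.Println("non-zero exit, so crio is not the active runtime:", err)
	}
}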

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-amd64 license: (1.043886268s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (1.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-240388 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-240388
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-240388
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-240388 image ls --format short --alsologtostderr:
I1217 19:38:40.907268  270102 out.go:360] Setting OutFile to fd 1 ...
I1217 19:38:40.907369  270102 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:40.907373  270102 out.go:374] Setting ErrFile to fd 2...
I1217 19:38:40.907384  270102 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:40.907569  270102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:38:40.908159  270102 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:40.908261  270102 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:40.911495  270102 ssh_runner.go:195] Run: systemctl --version
I1217 19:38:40.914134  270102 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:40.914735  270102 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
I1217 19:38:40.914782  270102 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:40.914989  270102 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
I1217 19:38:41.041827  270102 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-240388 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-240388 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/minikube-local-cache-test │ functional-240388 │ 5b8d74f1aec09 │ 30B    │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1      │ 73f80cdc073da │ 51.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1      │ 5032a56602e1b │ 75.8MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1      │ 58865405a13bc │ 89.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-240388 │ c183906d8c442 │ 1.24MB │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/etcd                        │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1      │ af0321f3a4f38 │ 70.7MB │
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-240388 image ls --format table --alsologtostderr:
I1217 19:38:45.352024  270340 out.go:360] Setting OutFile to fd 1 ...
I1217 19:38:45.352381  270340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:45.352396  270340 out.go:374] Setting ErrFile to fd 2...
I1217 19:38:45.352404  270340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:45.352796  270340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:38:45.353816  270340 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:45.353978  270340 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:45.357170  270340 ssh_runner.go:195] Run: systemctl --version
I1217 19:38:45.359782  270340 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:45.360398  270340 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
I1217 19:38:45.360432  270340 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:45.360677  270340 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
I1217 19:38:45.477110  270340 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2025/12/17 19:38:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-240388 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"c183906d8c4420b728f0f6323f7630f848de21d0c796fc597e8f5da6c1e23709","repoDigests":[],"repoTags":["localhost/my-image:functional-240388"],"size":"1240000"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5b8d74f1aec09b0d1db233b75847565d3c4b36b288e0335030106f8935c4f9b4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-240388"],"size":"30"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7e
e8b52363c86dd31838976ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"89800000"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"51700000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"75800000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags"
:["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-240388","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-240388 image ls --format json --alsologtostderr:
I1217 19:38:45.223439  270330 out.go:360] Setting OutFile to fd 1 ...
I1217 19:38:45.223695  270330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:45.223705  270330 out.go:374] Setting ErrFile to fd 2...
I1217 19:38:45.223709  270330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:45.223889  270330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:38:45.224516  270330 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:45.224650  270330 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:45.226911  270330 ssh_runner.go:195] Run: systemctl --version
I1217 19:38:45.229891  270330 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:45.230634  270330 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
I1217 19:38:45.230683  270330 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:45.230904  270330 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
I1217 19:38:45.351131  270330 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)
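
The JSON form above is an array of objects with id, repoDigests, repoTags, and size fields. A sketch that decodes it into a Go struct, assuming the same binary and profile as this run; only the fields visible above are modelled.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON listing above; other fields are omitted.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-240388",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", img.RepoTags, img.Size)
	}
}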

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-240388 image ls --format yaml --alsologtostderr:
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "70700000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "75800000"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "51700000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-240388
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 5b8d74f1aec09b0d1db233b75847565d3c4b36b288e0335030106f8935c4f9b4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-240388
size: "30"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "89800000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-240388 image ls --format yaml --alsologtostderr:
I1217 19:38:41.174234  270114 out.go:360] Setting OutFile to fd 1 ...
I1217 19:38:41.174351  270114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:41.174360  270114 out.go:374] Setting ErrFile to fd 2...
I1217 19:38:41.174365  270114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:41.174561  270114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:38:41.175174  270114 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:41.175264  270114 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:41.177742  270114 ssh_runner.go:195] Run: systemctl --version
I1217 19:38:41.180320  270114 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:41.180826  270114 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
I1217 19:38:41.180871  270114 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:41.181099  270114 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
I1217 19:38:41.271572  270114 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (4.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh pgrep buildkitd: exit status 1 (186.769493ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image build -t localhost/my-image:functional-240388 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 image build -t localhost/my-image:functional-240388 testdata/build --alsologtostderr: (3.987410291s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-240388 image build -t localhost/my-image:functional-240388 testdata/build --alsologtostderr:
I1217 19:38:41.561120  270136 out.go:360] Setting OutFile to fd 1 ...
I1217 19:38:41.561441  270136 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:41.561454  270136 out.go:374] Setting ErrFile to fd 2...
I1217 19:38:41.561461  270136 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:38:41.561803  270136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
I1217 19:38:41.562673  270136 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:41.563468  270136 config.go:182] Loaded profile config "functional-240388": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1217 19:38:41.565727  270136 ssh_runner.go:195] Run: systemctl --version
I1217 19:38:41.568053  270136 main.go:143] libmachine: domain functional-240388 has defined MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:41.568451  270136 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:98:a3", ip: ""} in network mk-functional-240388: {Iface:virbr1 ExpiryTime:2025-12-17 20:31:23 +0000 UTC Type:0 Mac:52:54:00:f3:98:a3 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-240388 Clientid:01:52:54:00:f3:98:a3}
I1217 19:38:41.568482  270136 main.go:143] libmachine: domain functional-240388 has defined IP address 192.168.39.22 and MAC address 52:54:00:f3:98:a3 in network mk-functional-240388
I1217 19:38:41.568638  270136 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/functional-240388/id_rsa Username:docker}
I1217 19:38:41.665619  270136 build_images.go:162] Building image from path: /tmp/build.4228618423.tar
I1217 19:38:41.665695  270136 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:38:41.683849  270136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4228618423.tar
I1217 19:38:41.691515  270136 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4228618423.tar: stat -c "%s %y" /var/lib/minikube/build/build.4228618423.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4228618423.tar': No such file or directory
I1217 19:38:41.691558  270136 ssh_runner.go:362] scp /tmp/build.4228618423.tar --> /var/lib/minikube/build/build.4228618423.tar (3072 bytes)
I1217 19:38:41.744093  270136 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4228618423
I1217 19:38:41.768790  270136 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4228618423 -xf /var/lib/minikube/build/build.4228618423.tar
I1217 19:38:41.803531  270136 docker.go:361] Building image: /var/lib/minikube/build/build.4228618423
I1217 19:38:41.803622  270136 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-240388 /var/lib/minikube/build/build.4228618423
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:c183906d8c4420b728f0f6323f7630f848de21d0c796fc597e8f5da6c1e23709
#8 writing image sha256:c183906d8c4420b728f0f6323f7630f848de21d0c796fc597e8f5da6c1e23709 0.0s done
#8 naming to localhost/my-image:functional-240388 0.0s done
#8 DONE 0.2s
I1217 19:38:45.427679  270136 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-240388 /var/lib/minikube/build/build.4228618423: (3.624026904s)
I1217 19:38:45.427784  270136 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4228618423
I1217 19:38:45.448921  270136 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4228618423.tar
I1217 19:38:45.471543  270136 build_images.go:218] Built localhost/my-image:functional-240388 from /tmp/build.4228618423.tar
I1217 19:38:45.471609  270136 build_images.go:134] succeeded building to: functional-240388
I1217 19:38:45.471618  270136 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (4.39s)
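
The build step above stages a tarball of testdata/build into the VM, runs docker build there, and then confirms the new tag with image ls. A condensed sketch of the same build-then-list check, reusing the localhost/my-image tag and testdata/build path from this run; illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-240388"
	tag := "localhost/my-image:" + profile

	// Build inside the cluster's container runtime, as minikube image build does.
	if out, err := exec.Command(minikube, "-p", profile,
		"image", "build", "-t", tag, "testdata/build").CombinedOutput(); err != nil {
		log.Fatalf("image build: %v\n%s", err, out)
	}
	// Confirm the tag is now listed by the runtime.
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("built and listed:", tag)
	} else {
		log.Fatalf("%s not found in image ls output", tag)
	}
}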

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-240388 docker-env) && out/minikube-linux-amd64 status -p functional-240388"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-240388 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.79s)
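
docker-env only prints export statements rather than reconfiguring anything itself, which is why the test wraps it in eval inside a bash -c subshell before calling docker. The same one-liner driven from Go; the binary path and profile are from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Evaluate the exports in a subshell, then list the images the VM's daemon sees.
	cmd := `eval $(out/minikube-linux-amd64 -p functional-240388 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}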

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image load --daemon kicbase/echo-server:functional-240388 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-240388 image load --daemon kicbase/echo-server:functional-240388 --alsologtostderr: (1.066235424s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.77s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image load --daemon kicbase/echo-server:functional-240388 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-240388
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image load --daemon kicbase/echo-server:functional-240388 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image save kicbase/echo-server:functional-240388 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image rm kicbase/echo-server:functional-240388 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (1.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.77s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-240388
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 image save --daemon kicbase/echo-server:functional-240388 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (24.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-240388 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-240388 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-mb4qp" [97b1493f-3ab9-4afb-a94b-3602c693e686] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-mb4qp" [97b1493f-3ab9-4afb-a94b-3602c693e686] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.011461487s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (24.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service list -o json
functional_test.go:1504: Took "469.737509ms" to run "out/minikube-linux-amd64 -p functional-240388 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "347.94006ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "77.674075ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.22:32445
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "363.011203ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.247739ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.22:32445
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun396503464/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766000314295222954" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun396503464/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766000314295222954" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun396503464/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766000314295222954" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun396503464/001/test-1766000314295222954
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.33603ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 19:38:34.524902  259985 retry.go:31] will retry after 279.958573ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:38 test-1766000314295222954
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh cat /mount-9p/test-1766000314295222954
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-240388 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [9d1605d8-4896-4e6e-9ebf-2879a3229d94] Pending
helpers_test.go:353: "busybox-mount" [9d1605d8-4896-4e6e-9ebf-2879a3229d94] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [9d1605d8-4896-4e6e-9ebf-2879a3229d94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [9d1605d8-4896-4e6e-9ebf-2879a3229d94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006433051s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-240388 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun396503464/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun596255508/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (176.545262ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 19:38:42.776459  259985 retry.go:31] will retry after 353.121522ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun596255508/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh "sudo umount -f /mount-9p": exit status 1 (197.364274ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-240388 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun596255508/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T" /mount1: exit status 1 (209.162672ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1217 19:38:44.084671  259985 retry.go:31] will retry after 329.440319ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-240388 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-240388 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-240388 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3604135063/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-240388
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestGvisorAddon (218.32s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-811570 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-811570 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m42.834190863s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-811570 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-811570 cache add gcr.io/k8s-minikube/gvisor-addon:2: (5.504526142s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-811570 addons enable gvisor
E1217 20:13:02.086553  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-811570 addons enable gvisor: (5.822276819s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [84d1a8e3-4466-4fb9-a7e2-d2307893c028] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004851119s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-811570 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [5cae61ee-0090-4d96-be04-38bda56e418f] Pending
helpers_test.go:353: "nginx-gvisor" [5cae61ee-0090-4d96-be04-38bda56e418f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-gvisor" [5cae61ee-0090-4d96-be04-38bda56e418f] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 10.005741167s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-811570
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-811570: (7.180462349s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-811570 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-811570 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m8.92663522s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [84d1a8e3-4466-4fb9-a7e2-d2307893c028] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006440066s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [5cae61ee-0090-4d96-be04-38bda56e418f] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004555982s
helpers_test.go:176: Cleaning up "gvisor-811570" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-811570
--- PASS: TestGvisorAddon (218.32s)

TestMultiControlPlane/serial/StartCluster (210.77s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1217 19:40:10.408538  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:38.119339  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (3m30.159202704s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (210.77s)

TestMultiControlPlane/serial/DeployApp (6.49s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 kubectl -- rollout status deployment/busybox: (3.938161649s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-6w8r5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-gt7zm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-m754p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-6w8r5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-gt7zm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-m754p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-6w8r5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-gt7zm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-m754p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.49s)

TestMultiControlPlane/serial/PingHostFromPods (1.44s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-6w8r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-6w8r5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-gt7zm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-gt7zm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-m754p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 kubectl -- exec busybox-7b57f96db7-m754p -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.44s)

TestMultiControlPlane/serial/AddWorkerNode (48.62s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node add --alsologtostderr -v 5
E1217 19:43:02.087330  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.093967  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.105551  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.127228  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.168791  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.250417  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.412022  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:02.733890  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:03.375529  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:04.657189  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:07.218749  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:43:12.340761  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 node add --alsologtostderr -v 5: (47.888582734s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.62s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-781239 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

TestMultiControlPlane/serial/CopyFile (11.32s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --output json --alsologtostderr -v 5
E1217 19:43:22.582631  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp testdata/cp-test.txt ha-781239:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3162783126/001/cp-test_ha-781239.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239:/home/docker/cp-test.txt ha-781239-m02:/home/docker/cp-test_ha-781239_ha-781239-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test_ha-781239_ha-781239-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239:/home/docker/cp-test.txt ha-781239-m03:/home/docker/cp-test_ha-781239_ha-781239-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test_ha-781239_ha-781239-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239:/home/docker/cp-test.txt ha-781239-m04:/home/docker/cp-test_ha-781239_ha-781239-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test_ha-781239_ha-781239-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp testdata/cp-test.txt ha-781239-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3162783126/001/cp-test_ha-781239-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m02:/home/docker/cp-test.txt ha-781239:/home/docker/cp-test_ha-781239-m02_ha-781239.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test_ha-781239-m02_ha-781239.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m02:/home/docker/cp-test.txt ha-781239-m03:/home/docker/cp-test_ha-781239-m02_ha-781239-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test_ha-781239-m02_ha-781239-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m02:/home/docker/cp-test.txt ha-781239-m04:/home/docker/cp-test_ha-781239-m02_ha-781239-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test_ha-781239-m02_ha-781239-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp testdata/cp-test.txt ha-781239-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3162783126/001/cp-test_ha-781239-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m03:/home/docker/cp-test.txt ha-781239:/home/docker/cp-test_ha-781239-m03_ha-781239.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test_ha-781239-m03_ha-781239.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m03:/home/docker/cp-test.txt ha-781239-m02:/home/docker/cp-test_ha-781239-m03_ha-781239-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test_ha-781239-m03_ha-781239-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m03:/home/docker/cp-test.txt ha-781239-m04:/home/docker/cp-test_ha-781239-m03_ha-781239-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test_ha-781239-m03_ha-781239-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp testdata/cp-test.txt ha-781239-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3162783126/001/cp-test_ha-781239-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m04:/home/docker/cp-test.txt ha-781239:/home/docker/cp-test_ha-781239-m04_ha-781239.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239 "sudo cat /home/docker/cp-test_ha-781239-m04_ha-781239.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m04:/home/docker/cp-test.txt ha-781239-m02:/home/docker/cp-test_ha-781239-m04_ha-781239-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m02 "sudo cat /home/docker/cp-test_ha-781239-m04_ha-781239-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 cp ha-781239-m04:/home/docker/cp-test.txt ha-781239-m03:/home/docker/cp-test_ha-781239-m04_ha-781239-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 ssh -n ha-781239-m03 "sudo cat /home/docker/cp-test_ha-781239-m04_ha-781239-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.32s)

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node stop m02 --alsologtostderr -v 5
E1217 19:43:43.065076  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 node stop m02 --alsologtostderr -v 5: (11.91914467s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5: exit status 7 (541.856366ms)
-- stdout --
	ha-781239
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-781239-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781239-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-781239-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1217 19:43:45.524207  273123 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:43:45.524436  273123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:43:45.524444  273123 out.go:374] Setting ErrFile to fd 2...
	I1217 19:43:45.524449  273123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:43:45.524684  273123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:43:45.524853  273123 out.go:368] Setting JSON to false
	I1217 19:43:45.524890  273123 mustload.go:66] Loading cluster: ha-781239
	I1217 19:43:45.525010  273123 notify.go:221] Checking for updates...
	I1217 19:43:45.525326  273123 config.go:182] Loaded profile config "ha-781239": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 19:43:45.525346  273123 status.go:174] checking status of ha-781239 ...
	I1217 19:43:45.527720  273123 status.go:371] ha-781239 host status = "Running" (err=<nil>)
	I1217 19:43:45.527737  273123 host.go:66] Checking if "ha-781239" exists ...
	I1217 19:43:45.530841  273123 main.go:143] libmachine: domain ha-781239 has defined MAC address 52:54:00:bb:2c:a7 in network mk-ha-781239
	I1217 19:43:45.531588  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:2c:a7", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:39:09 +0000 UTC Type:0 Mac:52:54:00:bb:2c:a7 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-781239 Clientid:01:52:54:00:bb:2c:a7}
	I1217 19:43:45.531652  273123 main.go:143] libmachine: domain ha-781239 has defined IP address 192.168.39.62 and MAC address 52:54:00:bb:2c:a7 in network mk-ha-781239
	I1217 19:43:45.531835  273123 host.go:66] Checking if "ha-781239" exists ...
	I1217 19:43:45.532138  273123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:43:45.534752  273123 main.go:143] libmachine: domain ha-781239 has defined MAC address 52:54:00:bb:2c:a7 in network mk-ha-781239
	I1217 19:43:45.535273  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:2c:a7", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:39:09 +0000 UTC Type:0 Mac:52:54:00:bb:2c:a7 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-781239 Clientid:01:52:54:00:bb:2c:a7}
	I1217 19:43:45.535303  273123 main.go:143] libmachine: domain ha-781239 has defined IP address 192.168.39.62 and MAC address 52:54:00:bb:2c:a7 in network mk-ha-781239
	I1217 19:43:45.535501  273123 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/ha-781239/id_rsa Username:docker}
	I1217 19:43:45.624334  273123 ssh_runner.go:195] Run: systemctl --version
	I1217 19:43:45.630882  273123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:43:45.652785  273123 kubeconfig.go:125] found "ha-781239" server: "https://192.168.39.254:8443"
	I1217 19:43:45.652842  273123 api_server.go:166] Checking apiserver status ...
	I1217 19:43:45.652917  273123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:43:45.677262  273123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2468/cgroup
	W1217 19:43:45.690191  273123 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2468/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:43:45.690258  273123 ssh_runner.go:195] Run: ls
	I1217 19:43:45.696462  273123 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 19:43:45.703269  273123 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 19:43:45.703307  273123 status.go:463] ha-781239 apiserver status = Running (err=<nil>)
	I1217 19:43:45.703336  273123 status.go:176] ha-781239 status: &{Name:ha-781239 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:43:45.703360  273123 status.go:174] checking status of ha-781239-m02 ...
	I1217 19:43:45.704996  273123 status.go:371] ha-781239-m02 host status = "Stopped" (err=<nil>)
	I1217 19:43:45.705018  273123 status.go:384] host is not running, skipping remaining checks
	I1217 19:43:45.705026  273123 status.go:176] ha-781239-m02 status: &{Name:ha-781239-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:43:45.705047  273123 status.go:174] checking status of ha-781239-m03 ...
	I1217 19:43:45.706364  273123 status.go:371] ha-781239-m03 host status = "Running" (err=<nil>)
	I1217 19:43:45.706385  273123 host.go:66] Checking if "ha-781239-m03" exists ...
	I1217 19:43:45.709286  273123 main.go:143] libmachine: domain ha-781239-m03 has defined MAC address 52:54:00:c4:5b:09 in network mk-ha-781239
	I1217 19:43:45.709772  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c4:5b:09", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:41:20 +0000 UTC Type:0 Mac:52:54:00:c4:5b:09 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-781239-m03 Clientid:01:52:54:00:c4:5b:09}
	I1217 19:43:45.709796  273123 main.go:143] libmachine: domain ha-781239-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:c4:5b:09 in network mk-ha-781239
	I1217 19:43:45.709961  273123 host.go:66] Checking if "ha-781239-m03" exists ...
	I1217 19:43:45.710239  273123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:43:45.713221  273123 main.go:143] libmachine: domain ha-781239-m03 has defined MAC address 52:54:00:c4:5b:09 in network mk-ha-781239
	I1217 19:43:45.713712  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c4:5b:09", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:41:20 +0000 UTC Type:0 Mac:52:54:00:c4:5b:09 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-781239-m03 Clientid:01:52:54:00:c4:5b:09}
	I1217 19:43:45.713736  273123 main.go:143] libmachine: domain ha-781239-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:c4:5b:09 in network mk-ha-781239
	I1217 19:43:45.713906  273123 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/ha-781239-m03/id_rsa Username:docker}
	I1217 19:43:45.805652  273123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:43:45.823494  273123 kubeconfig.go:125] found "ha-781239" server: "https://192.168.39.254:8443"
	I1217 19:43:45.823536  273123 api_server.go:166] Checking apiserver status ...
	I1217 19:43:45.823584  273123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:43:45.851136  273123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2345/cgroup
	W1217 19:43:45.865172  273123 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2345/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:43:45.865230  273123 ssh_runner.go:195] Run: ls
	I1217 19:43:45.870729  273123 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 19:43:45.875640  273123 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 19:43:45.875684  273123 status.go:463] ha-781239-m03 apiserver status = Running (err=<nil>)
	I1217 19:43:45.875697  273123 status.go:176] ha-781239-m03 status: &{Name:ha-781239-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:43:45.875724  273123 status.go:174] checking status of ha-781239-m04 ...
	I1217 19:43:45.877482  273123 status.go:371] ha-781239-m04 host status = "Running" (err=<nil>)
	I1217 19:43:45.877501  273123 host.go:66] Checking if "ha-781239-m04" exists ...
	I1217 19:43:45.880039  273123 main.go:143] libmachine: domain ha-781239-m04 has defined MAC address 52:54:00:04:50:21 in network mk-ha-781239
	I1217 19:43:45.880451  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:50:21", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:42:48 +0000 UTC Type:0 Mac:52:54:00:04:50:21 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-781239-m04 Clientid:01:52:54:00:04:50:21}
	I1217 19:43:45.880477  273123 main.go:143] libmachine: domain ha-781239-m04 has defined IP address 192.168.39.189 and MAC address 52:54:00:04:50:21 in network mk-ha-781239
	I1217 19:43:45.880627  273123 host.go:66] Checking if "ha-781239-m04" exists ...
	I1217 19:43:45.880809  273123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:43:45.883101  273123 main.go:143] libmachine: domain ha-781239-m04 has defined MAC address 52:54:00:04:50:21 in network mk-ha-781239
	I1217 19:43:45.883617  273123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:50:21", ip: ""} in network mk-ha-781239: {Iface:virbr1 ExpiryTime:2025-12-17 20:42:48 +0000 UTC Type:0 Mac:52:54:00:04:50:21 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-781239-m04 Clientid:01:52:54:00:04:50:21}
	I1217 19:43:45.883642  273123 main.go:143] libmachine: domain ha-781239-m04 has defined IP address 192.168.39.189 and MAC address 52:54:00:04:50:21 in network mk-ha-781239
	I1217 19:43:45.883799  273123 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/ha-781239-m04/id_rsa Username:docker}
	I1217 19:43:45.972171  273123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:43:45.991576  273123 status.go:176] ha-781239-m04 status: &{Name:ha-781239-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
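Note on the status log above: for each running control-plane node, the status check SSHes into the machine, locates the kube-apiserver process, and then probes https://192.168.39.254:8443/healthz, treating an HTTP 200 with body "ok" as Running. The Go sketch below reproduces only that final probe; it is not the test's own code, the endpoint is copied from the log, and TLS verification is skipped here for brevity (minikube itself authenticates with the cluster's certificates).

// healthzprobe.go - minimal sketch of the apiserver health probe visible in the
// status log above (GET https://<vip>:8443/healthz, expect HTTP 200 and body "ok").
// Assumption: certificate checking is disabled for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver status = Running")
	} else {
		fmt.Printf("apiserver status = Error (%d: %s)\n", resp.StatusCode, body)
	}
}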

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.8s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node start m02 --alsologtostderr -v 5
E1217 19:43:53.568689  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 node start m02 --alsologtostderr -v 5: (24.869446218s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.15s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 stop --alsologtostderr -v 5
E1217 19:44:24.027228  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 stop --alsologtostderr -v 5: (42.473588496s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 start --wait true --alsologtostderr -v 5
E1217 19:45:10.407189  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:45:16.635835  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:45:45.948883  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 start --wait true --alsologtostderr -v 5: (2m5.521665574s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.15s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.44s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 node delete m03 --alsologtostderr -v 5: (5.732384135s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.44s)
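Note: the kubectl go-template used above prints the status of each remaining node's "Ready" condition, one per line, which is how the test confirms the cluster is still healthy after the delete. The sketch below evaluates the same template locally over a hypothetical two-node list so the output shape is visible; the JSON is illustrative, not captured from this run.

// readytemplate.go - sketch of what the go-template in the test above evaluates to:
// for every node it prints the status of its "Ready" condition.
// Assumption: the node list JSON below is a made-up stand-in for `kubectl get nodes -o json`.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const nodeList = `{"items":[
  {"metadata":{"name":"ha-781239"},     "status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"metadata":{"name":"ha-781239-m04"}, "status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &nodes); err != nil {
		panic(err)
	}
	// Prints one " True" line per node, which is the pattern the test looks for.
	if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}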

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (38.28s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 stop --alsologtostderr -v 5: (38.206256887s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5: exit status 7 (71.664294ms)
-- stdout --
	ha-781239
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781239-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781239-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 19:47:46.665467  274701 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:47:46.665602  274701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:47:46.665611  274701 out.go:374] Setting ErrFile to fd 2...
	I1217 19:47:46.665616  274701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:47:46.665838  274701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:47:46.666005  274701 out.go:368] Setting JSON to false
	I1217 19:47:46.666041  274701 mustload.go:66] Loading cluster: ha-781239
	I1217 19:47:46.666269  274701 notify.go:221] Checking for updates...
	I1217 19:47:46.666388  274701 config.go:182] Loaded profile config "ha-781239": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 19:47:46.666404  274701 status.go:174] checking status of ha-781239 ...
	I1217 19:47:46.668561  274701 status.go:371] ha-781239 host status = "Stopped" (err=<nil>)
	I1217 19:47:46.668578  274701 status.go:384] host is not running, skipping remaining checks
	I1217 19:47:46.668584  274701 status.go:176] ha-781239 status: &{Name:ha-781239 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:47:46.668613  274701 status.go:174] checking status of ha-781239-m02 ...
	I1217 19:47:46.669939  274701 status.go:371] ha-781239-m02 host status = "Stopped" (err=<nil>)
	I1217 19:47:46.669957  274701 status.go:384] host is not running, skipping remaining checks
	I1217 19:47:46.669961  274701 status.go:176] ha-781239-m02 status: &{Name:ha-781239-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:47:46.669974  274701 status.go:174] checking status of ha-781239-m04 ...
	I1217 19:47:46.671088  274701 status.go:371] ha-781239-m04 host status = "Stopped" (err=<nil>)
	I1217 19:47:46.671103  274701 status.go:384] host is not running, skipping remaining checks
	I1217 19:47:46.671107  274701 status.go:176] ha-781239-m04 status: &{Name:ha-781239-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.28s)
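Note: once every node is stopped, the status command still prints the per-node summary but exits with status 7, as seen above. A caller that shells out to it has to treat that exit code as data rather than as a hard failure; the sketch below shows one way to do that with os/exec. The binary path and profile name are copied from the log, and the meaning attached to the non-zero code is inferred from this run only.

// statusexit.go - sketch of reading the exit code of the status command shown above.
// Assumption: exit status 7 is interpreted here as "components stopped" based on this log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-781239", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit status 0: everything running")
	case errors.As(err, &exitErr):
		// Exit status 7 is what the fully stopped cluster above reports.
		fmt.Printf("exit status %d: one or more components stopped\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run status command:", err)
	}
}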

TestMultiControlPlane/serial/RestartCluster (113.49s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E1217 19:48:02.087524  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:48:29.790398  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:48:53.568884  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m52.811660194s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (113.49s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (90.85s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 node add --control-plane --alsologtostderr -v 5
E1217 19:50:10.409009  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-781239 node add --control-plane --alsologtostderr -v 5: (1m30.134241145s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-781239 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (90.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

TestImageBuild/serial/Setup (38.51s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-105508 --driver=kvm2 
E1217 19:51:33.481504  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-105508 --driver=kvm2 : (38.507552533s)
--- PASS: TestImageBuild/serial/Setup (38.51s)

TestImageBuild/serial/NormalBuild (1.56s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-105508
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-105508: (1.560478908s)
--- PASS: TestImageBuild/serial/NormalBuild (1.56s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-105508
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-105508: (1.019824771s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-105508
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-105508
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

TestJSONOutput/start/Command (55.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-818399 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-818399 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (55.781758073s)
--- PASS: TestJSONOutput/start/Command (55.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-818399 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-818399 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (14.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-818399 --output=json --user=testUser
E1217 19:53:02.087073  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-818399 --output=json --user=testUser: (14.762801513s)
--- PASS: TestJSONOutput/stop/Command (14.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-169580 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-169580 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.415703ms)
-- stdout --
	{"specversion":"1.0","id":"81010236-cb80-4f29-b198-bcacca39b0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-169580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87392ce1-ded7-48f1-85d6-344e2b8b384c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"a9fc9941-358d-4c4b-96ff-88ae8f893b20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"313cb6b1-907e-4423-afbb-bcf4b6b43641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig"}}
	{"specversion":"1.0","id":"f369eab7-c9c6-499a-adc2-d563bfbf25e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube"}}
	{"specversion":"1.0","id":"2e91ab55-2721-4eed-b57a-3e86420d9fc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"49190717-d78b-4da6-8972-a608230bf679","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3bfca39c-4a0f-4502-b3cd-4f3b0f9a2810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-169580" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-169580
--- PASS: TestErrorJSONOutput (0.26s)
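Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, and the error event above carries the exit code and message for the unsupported "fail" driver. The sketch below decodes such lines; the struct mirrors only the keys visible in this output, and the embedded sample is an abbreviated copy of the error event shown above.

// jsonevents.go - sketch of decoding the line-delimited CloudEvents emitted with --output=json.
// Assumption: only the fields visible in the stdout block above are modeled here.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

const sample = `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

func main() {
	scanner := bufio.NewScanner(strings.NewReader(sample))
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}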

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (87.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-613089 --driver=kvm2 
E1217 19:53:53.568895  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-613089 --driver=kvm2 : (43.801272867s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-616394 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-616394 --driver=kvm2 : (40.793243002s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-613089
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-616394
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-616394" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-616394
helpers_test.go:176: Cleaning up "first-613089" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-613089
--- PASS: TestMinikubeProfile (87.30s)

TestMountStart/serial/StartWithMountFirst (20.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-851920 --memory=3072 --mount-string /tmp/TestMountStartserial1656511031/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-851920 --memory=3072 --mount-string /tmp/TestMountStartserial1656511031/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (19.561939171s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.56s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-851920 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-851920 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
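Note: the mount verification above lists the directory over SSH and then runs findmnt --json /minikube-host. The sketch below parses that JSON and checks that a filesystem entry exists for the target; the field layout is the usual util-linux findmnt --json shape, and the sample values are illustrative rather than taken from this run.

// verifymount.go - sketch of checking the mount the test above verifies, by parsing
// `findmnt --json /minikube-host` output.
// Assumptions: util-linux JSON layout; the sample values below are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

const sample = `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`

func main() {
	var out findmntOutput
	if err := json.Unmarshal([]byte(sample), &out); err != nil {
		panic(err)
	}
	if len(out.Filesystems) == 0 {
		fmt.Println("/minikube-host is not mounted")
		return
	}
	fs := out.Filesystems[0]
	fmt.Printf("%s is mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
}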

TestMountStart/serial/StartWithMountSecond (21.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-872103 --memory=3072 --mount-string /tmp/TestMountStartserial1656511031/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1217 19:55:10.413688  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-872103 --memory=3072 --mount-string /tmp/TestMountStartserial1656511031/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (20.435825797s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.44s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-851920 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.74s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-872103
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-872103: (1.290665457s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (21.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-872103
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-872103: (20.032979928s)
--- PASS: TestMountStart/serial/RestartStopped (21.03s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872103 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (110.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814341 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814341 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (1m50.278595533s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.62s)

TestMultiNode/serial/DeployApp2Nodes (5.53s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-814341 -- rollout status deployment/busybox: (3.783796421s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-b8rt8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-jqvzk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-b8rt8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-jqvzk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-b8rt8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-jqvzk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.53s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-b8rt8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-b8rt8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-jqvzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814341 -- exec busybox-7b57f96db7-jqvzk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
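Note: inside each busybox pod the test resolves host.minikube.internal and extracts the host IP with `awk 'NR==5' | cut -d' ' -f3` (fifth line of the nslookup output, third space-separated field) before pinging it. The sketch below performs the same extraction in Go; the embedded nslookup text is illustrative busybox-style output, not captured from this run.

// hostip.go - sketch of the shell pipeline run inside each pod by the test above.
// Assumption: the nslookup output below is an illustrative busybox-style sample.
package main

import (
	"fmt"
	"strings"
)

const nslookupOut = `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal`

func main() {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		fmt.Println("unexpected nslookup output")
		return
	}
	// awk 'NR==5' -> take the fifth line (index 4).
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		fmt.Println("unexpected line format")
		return
	}
	// cut -d' ' -f3 -> third space-separated field (index 2).
	fmt.Println("host IP:", fields[2])
}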

TestMultiNode/serial/AddNode (50.32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-814341 -v=5 --alsologtostderr
E1217 19:58:02.087654  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-814341 -v=5 --alsologtostderr: (49.863717305s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.32s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-814341 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.49s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.49s)

TestMultiNode/serial/CopyFile (6.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp testdata/cp-test.txt multinode-814341:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile759196652/001/cp-test_multinode-814341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341:/home/docker/cp-test.txt multinode-814341-m02:/home/docker/cp-test_multinode-814341_multinode-814341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test_multinode-814341_multinode-814341-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341:/home/docker/cp-test.txt multinode-814341-m03:/home/docker/cp-test_multinode-814341_multinode-814341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test_multinode-814341_multinode-814341-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp testdata/cp-test.txt multinode-814341-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile759196652/001/cp-test_multinode-814341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m02:/home/docker/cp-test.txt multinode-814341:/home/docker/cp-test_multinode-814341-m02_multinode-814341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test_multinode-814341-m02_multinode-814341.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m02:/home/docker/cp-test.txt multinode-814341-m03:/home/docker/cp-test_multinode-814341-m02_multinode-814341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test_multinode-814341-m02_multinode-814341-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp testdata/cp-test.txt multinode-814341-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile759196652/001/cp-test_multinode-814341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m03:/home/docker/cp-test.txt multinode-814341:/home/docker/cp-test_multinode-814341-m03_multinode-814341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341 "sudo cat /home/docker/cp-test_multinode-814341-m03_multinode-814341.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 cp multinode-814341-m03:/home/docker/cp-test.txt multinode-814341-m02:/home/docker/cp-test_multinode-814341-m03_multinode-814341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 ssh -n multinode-814341-m02 "sudo cat /home/docker/cp-test_multinode-814341-m03_multinode-814341-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.21s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-814341 node stop m03: (1.74327048s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814341 status: exit status 7 (334.440139ms)
-- stdout --
	multinode-814341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814341-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814341-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr: exit status 7 (342.198916ms)
-- stdout --
	multinode-814341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814341-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814341-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 19:58:42.529684  280861 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:58:42.529958  280861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:58:42.529968  280861 out.go:374] Setting ErrFile to fd 2...
	I1217 19:58:42.529972  280861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:58:42.530182  280861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 19:58:42.530351  280861 out.go:368] Setting JSON to false
	I1217 19:58:42.530384  280861 mustload.go:66] Loading cluster: multinode-814341
	I1217 19:58:42.530530  280861 notify.go:221] Checking for updates...
	I1217 19:58:42.530762  280861 config.go:182] Loaded profile config "multinode-814341": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 19:58:42.530783  280861 status.go:174] checking status of multinode-814341 ...
	I1217 19:58:42.533228  280861 status.go:371] multinode-814341 host status = "Running" (err=<nil>)
	I1217 19:58:42.533244  280861 host.go:66] Checking if "multinode-814341" exists ...
	I1217 19:58:42.535820  280861 main.go:143] libmachine: domain multinode-814341 has defined MAC address 52:54:00:50:b2:f3 in network mk-multinode-814341
	I1217 19:58:42.536290  280861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:b2:f3", ip: ""} in network mk-multinode-814341: {Iface:virbr1 ExpiryTime:2025-12-17 20:56:01 +0000 UTC Type:0 Mac:52:54:00:50:b2:f3 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-814341 Clientid:01:52:54:00:50:b2:f3}
	I1217 19:58:42.536324  280861 main.go:143] libmachine: domain multinode-814341 has defined IP address 192.168.39.77 and MAC address 52:54:00:50:b2:f3 in network mk-multinode-814341
	I1217 19:58:42.536477  280861 host.go:66] Checking if "multinode-814341" exists ...
	I1217 19:58:42.536765  280861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:58:42.539407  280861 main.go:143] libmachine: domain multinode-814341 has defined MAC address 52:54:00:50:b2:f3 in network mk-multinode-814341
	I1217 19:58:42.539771  280861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:b2:f3", ip: ""} in network mk-multinode-814341: {Iface:virbr1 ExpiryTime:2025-12-17 20:56:01 +0000 UTC Type:0 Mac:52:54:00:50:b2:f3 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-814341 Clientid:01:52:54:00:50:b2:f3}
	I1217 19:58:42.539792  280861 main.go:143] libmachine: domain multinode-814341 has defined IP address 192.168.39.77 and MAC address 52:54:00:50:b2:f3 in network mk-multinode-814341
	I1217 19:58:42.539930  280861 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/multinode-814341/id_rsa Username:docker}
	I1217 19:58:42.621515  280861 ssh_runner.go:195] Run: systemctl --version
	I1217 19:58:42.630559  280861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:58:42.649882  280861 kubeconfig.go:125] found "multinode-814341" server: "https://192.168.39.77:8443"
	I1217 19:58:42.649923  280861 api_server.go:166] Checking apiserver status ...
	I1217 19:58:42.649960  280861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:58:42.670686  280861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2370/cgroup
	W1217 19:58:42.684314  280861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:58:42.684372  280861 ssh_runner.go:195] Run: ls
	I1217 19:58:42.690576  280861 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1217 19:58:42.695223  280861 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I1217 19:58:42.695249  280861 status.go:463] multinode-814341 apiserver status = Running (err=<nil>)
	I1217 19:58:42.695259  280861 status.go:176] multinode-814341 status: &{Name:multinode-814341 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:58:42.695275  280861 status.go:174] checking status of multinode-814341-m02 ...
	I1217 19:58:42.696934  280861 status.go:371] multinode-814341-m02 host status = "Running" (err=<nil>)
	I1217 19:58:42.696953  280861 host.go:66] Checking if "multinode-814341-m02" exists ...
	I1217 19:58:42.699432  280861 main.go:143] libmachine: domain multinode-814341-m02 has defined MAC address 52:54:00:4b:43:62 in network mk-multinode-814341
	I1217 19:58:42.699933  280861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:43:62", ip: ""} in network mk-multinode-814341: {Iface:virbr1 ExpiryTime:2025-12-17 20:57:03 +0000 UTC Type:0 Mac:52:54:00:4b:43:62 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-814341-m02 Clientid:01:52:54:00:4b:43:62}
	I1217 19:58:42.699956  280861 main.go:143] libmachine: domain multinode-814341-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:4b:43:62 in network mk-multinode-814341
	I1217 19:58:42.700104  280861 host.go:66] Checking if "multinode-814341-m02" exists ...
	I1217 19:58:42.700357  280861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:58:42.702577  280861 main.go:143] libmachine: domain multinode-814341-m02 has defined MAC address 52:54:00:4b:43:62 in network mk-multinode-814341
	I1217 19:58:42.703103  280861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:43:62", ip: ""} in network mk-multinode-814341: {Iface:virbr1 ExpiryTime:2025-12-17 20:57:03 +0000 UTC Type:0 Mac:52:54:00:4b:43:62 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-814341-m02 Clientid:01:52:54:00:4b:43:62}
	I1217 19:58:42.703128  280861 main.go:143] libmachine: domain multinode-814341-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:4b:43:62 in network mk-multinode-814341
	I1217 19:58:42.703287  280861 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-255930/.minikube/machines/multinode-814341-m02/id_rsa Username:docker}
	I1217 19:58:42.788930  280861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:58:42.806180  280861 status.go:176] multinode-814341-m02 status: &{Name:multinode-814341-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:58:42.806222  280861 status.go:174] checking status of multinode-814341-m03 ...
	I1217 19:58:42.808014  280861 status.go:371] multinode-814341-m03 host status = "Stopped" (err=<nil>)
	I1217 19:58:42.808032  280861 status.go:384] host is not running, skipping remaining checks
	I1217 19:58:42.808037  280861 status.go:176] multinode-814341-m03 status: &{Name:multinode-814341-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 node start m03 -v=5 --alsologtostderr
E1217 19:58:53.567937  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:59:25.153279  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-814341 node start m03 -v=5 --alsologtostderr: (44.488301091s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (45.03s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (165.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814341
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-814341
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-814341: (26.553286194s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814341 --wait=true -v=5 --alsologtostderr
E1217 20:00:10.408019  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:01:56.637845  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814341 --wait=true -v=5 --alsologtostderr: (2m18.873630961s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814341
--- PASS: TestMultiNode/serial/RestartKeepsNodes (165.57s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-814341 node delete m03: (1.697827216s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-814341 stop: (25.117317768s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814341 status: exit status 7 (71.552051ms)

                                                
                                                
-- stdout --
	multinode-814341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814341-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr: exit status 7 (68.043428ms)

                                                
                                                
-- stdout --
	multinode-814341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814341-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:02:40.876652  282305 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:40.876923  282305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:40.876934  282305 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:40.876938  282305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:40.877216  282305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 20:02:40.877438  282305 out.go:368] Setting JSON to false
	I1217 20:02:40.877482  282305 mustload.go:66] Loading cluster: multinode-814341
	I1217 20:02:40.877554  282305 notify.go:221] Checking for updates...
	I1217 20:02:40.877910  282305 config.go:182] Loaded profile config "multinode-814341": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 20:02:40.877926  282305 status.go:174] checking status of multinode-814341 ...
	I1217 20:02:40.880302  282305 status.go:371] multinode-814341 host status = "Stopped" (err=<nil>)
	I1217 20:02:40.880322  282305 status.go:384] host is not running, skipping remaining checks
	I1217 20:02:40.880329  282305 status.go:176] multinode-814341 status: &{Name:multinode-814341 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 20:02:40.880351  282305 status.go:174] checking status of multinode-814341-m02 ...
	I1217 20:02:40.881586  282305 status.go:371] multinode-814341-m02 host status = "Stopped" (err=<nil>)
	I1217 20:02:40.881614  282305 status.go:384] host is not running, skipping remaining checks
	I1217 20:02:40.881620  282305 status.go:176] multinode-814341-m02 status: &{Name:multinode-814341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.26s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (85.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814341 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E1217 20:03:02.087093  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:03:53.568655  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814341 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (1m24.76598102s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814341 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814341
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814341-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-814341-m02 --driver=kvm2 : exit status 14 (86.695014ms)

                                                
                                                
-- stdout --
	* [multinode-814341-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-814341-m02' is duplicated with machine name 'multinode-814341-m02' in profile 'multinode-814341'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814341-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814341-m03 --driver=kvm2 : (45.240181882s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-814341
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-814341: exit status 80 (219.459749ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-814341 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-814341-m03 already exists in multinode-814341-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-814341-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.46s)

                                                
                                    
TestPreload (129.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-803290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
E1217 20:05:10.407296  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-803290 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (1m8.776313976s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-803290 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-803290 image pull gcr.io/k8s-minikube/busybox: (2.217632602s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-803290
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-803290: (14.600684801s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-803290 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-803290 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (42.8666648s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-803290 image list
helpers_test.go:176: Cleaning up "test-preload-803290" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-803290
--- PASS: TestPreload (129.56s)

                                                
                                    
TestScheduledStopUnix (115.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-841689 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-841689 --memory=3072 --driver=kvm2 : (43.459821837s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841689 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:07:47.217020  284492 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:07:47.217294  284492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:07:47.217305  284492 out.go:374] Setting ErrFile to fd 2...
	I1217 20:07:47.217310  284492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:07:47.217520  284492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 20:07:47.217802  284492 out.go:368] Setting JSON to false
	I1217 20:07:47.217894  284492 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:07:47.218216  284492 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 20:07:47.218285  284492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/config.json ...
	I1217 20:07:47.218473  284492 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:07:47.218630  284492 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-841689 -n scheduled-stop-841689
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841689 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:07:47.508852  284536 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:07:47.509127  284536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:07:47.509138  284536 out.go:374] Setting ErrFile to fd 2...
	I1217 20:07:47.509142  284536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:07:47.509357  284536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 20:07:47.509626  284536 out.go:368] Setting JSON to false
	I1217 20:07:47.509826  284536 daemonize_unix.go:73] killing process 284526 as it is an old scheduled stop
	I1217 20:07:47.509929  284536 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:07:47.510269  284536 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 20:07:47.510336  284536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/config.json ...
	I1217 20:07:47.510526  284536 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:07:47.510653  284536 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 20:07:47.517362  259985 retry.go:31] will retry after 74.704µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.518542  259985 retry.go:31] will retry after 106.525µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.519681  259985 retry.go:31] will retry after 145.509µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.520813  259985 retry.go:31] will retry after 399.681µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.521947  259985 retry.go:31] will retry after 306.175µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.523106  259985 retry.go:31] will retry after 621.849µs: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.524238  259985 retry.go:31] will retry after 1.137216ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.526483  259985 retry.go:31] will retry after 1.245913ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.528760  259985 retry.go:31] will retry after 3.56981ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.532981  259985 retry.go:31] will retry after 1.98511ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.535239  259985 retry.go:31] will retry after 5.699233ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.541464  259985 retry.go:31] will retry after 12.382197ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.554800  259985 retry.go:31] will retry after 6.907822ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.562088  259985 retry.go:31] will retry after 10.663337ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.573390  259985 retry.go:31] will retry after 43.173351ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
I1217 20:07:47.617706  259985 retry.go:31] will retry after 37.25812ms: open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841689 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1217 20:08:02.087326  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841689 -n scheduled-stop-841689
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841689
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841689 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 20:08:13.242982  284684 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:08:13.243112  284684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:08:13.243120  284684 out.go:374] Setting ErrFile to fd 2...
	I1217 20:08:13.243133  284684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:08:13.243341  284684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-255930/.minikube/bin
	I1217 20:08:13.243652  284684 out.go:368] Setting JSON to false
	I1217 20:08:13.243755  284684 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:08:13.244108  284684 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1217 20:08:13.244194  284684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/scheduled-stop-841689/config.json ...
	I1217 20:08:13.244402  284684 mustload.go:66] Loading cluster: scheduled-stop-841689
	I1217 20:08:13.244524  284684 config.go:182] Loaded profile config "scheduled-stop-841689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1217 20:08:13.483662  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 20:08:53.569216  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841689
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-841689: exit status 7 (65.285315ms)

                                                
                                                
-- stdout --
	scheduled-stop-841689
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841689 -n scheduled-stop-841689
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841689 -n scheduled-stop-841689: exit status 7 (65.798183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-841689" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-841689
--- PASS: TestScheduledStopUnix (115.13s)

                                                
                                    
TestSkaffold (125.2s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3165494820 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-444067 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-444067 --memory=3072 --driver=kvm2 : (40.846722259s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3165494820 run --minikube-profile skaffold-444067 --kube-context skaffold-444067 --status-check=true --port-forward=false --interactive=false
E1217 20:10:10.412274  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3165494820 run --minikube-profile skaffold-444067 --kube-context skaffold-444067 --status-check=true --port-forward=false --interactive=false: (1m8.782707432s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-977cf5dfd-w7bz8" [000d4dfd-a1af-428f-af76-480b909beae8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005127797s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6d5f94d497-k6lj9" [d25e0a5a-3d94-4e84-a372-1f16d6ff8682] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004104209s
helpers_test.go:176: Cleaning up "skaffold-444067" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-444067
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-444067: (1.030588471s)
--- PASS: TestSkaffold (125.20s)

                                                
                                    
TestRunningBinaryUpgrade (413.28s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2292794652 start -p running-upgrade-545642 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2292794652 start -p running-upgrade-545642 --memory=3072 --vm-driver=kvm2 : (1m36.105484307s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-545642 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-545642 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (5m13.023379212s)
helpers_test.go:176: Cleaning up "running-upgrade-545642" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-545642
--- PASS: TestRunningBinaryUpgrade (413.28s)

                                                
                                    
TestKubernetesUpgrade (226.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m6.769587877s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-184336
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-184336: (3.071292388s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-184336 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-184336 status --format={{.Host}}: exit status 7 (92.488247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2 
E1217 20:13:53.568048  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2 : (1m16.647060784s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-184336 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (102.257147ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-184336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-184336
	    minikube start -p kubernetes-upgrade-184336 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1843362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-184336 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2 
E1217 20:15:10.407961  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:51.964878  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:51.971486  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:51.983029  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:52.004684  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:52.046291  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:52.128148  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:52.290067  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:52.611921  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:53.253941  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:54.535906  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:15:57.098269  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:16:02.219917  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:16:05.155257  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184336 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2 : (1m19.04648822s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-184336" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-184336
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-184336: (1.195023288s)
--- PASS: TestKubernetesUpgrade (226.99s)

                                                
                                    
TestISOImage/Setup (22.4s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-021194 --no-kubernetes --driver=kvm2 
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-021194 --no-kubernetes --driver=kvm2 : (22.396492589s)
--- PASS: TestISOImage/Setup (22.40s)

                                                
                                    
TestISOImage/Binaries/crictl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

                                                
                                    
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.21s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

                                                
                                    
TestISOImage/Binaries/socat (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

                                                
                                    
TestISOImage/Binaries/wget (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-021194 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (164.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.292971080 start -p stopped-upgrade-301553 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.292971080 start -p stopped-upgrade-301553 --memory=3072 --vm-driver=kvm2 : (1m2.977243455s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.292971080 -p stopped-upgrade-301553 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.292971080 -p stopped-upgrade-301553 stop: (14.831852134s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-301553 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-301553 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (1m27.166110896s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (164.98s)

                                                
                                    
TestPause/serial/Start (93.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220068 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-220068 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m33.67944842s)
--- PASS: TestPause/serial/Start (93.68s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-301553
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-301553: (2.336314939s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (87.95s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220068 --alsologtostderr -v=1 --driver=kvm2 
E1217 20:16:32.943929  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-220068 --alsologtostderr -v=1 --driver=kvm2 : (1m27.925307421s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (87.95s)

                                                
                                    
TestPause/serial/Pause (1.11s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-220068 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-220068 --alsologtostderr -v=5: (1.109410343s)
--- PASS: TestPause/serial/Pause (1.11s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-220068 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-220068 --output=json --layout=cluster: exit status 2 (249.127714ms)

                                                
                                                
-- stdout --
	{"Name":"pause-220068","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-220068","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-220068 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestPause/serial/PauseAgain (0.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-220068 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

                                                
                                    
TestPause/serial/DeletePaused (1.13s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-220068 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-220068 --alsologtostderr -v=5: (1.129447518s)
--- PASS: TestPause/serial/DeletePaused (1.13s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (13.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1217 20:18:02.087356  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.512934  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.519410  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.530916  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.552399  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.593887  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.675443  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:02.837022  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:03.158756  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:03.801082  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:05.082682  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.570561393s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (86.319354ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-888226] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-255930/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-255930/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
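
This step exercises the mutual exclusion of --no-kubernetes and --kubernetes-version: the CLI refuses the combination with exit status 14 and an MK_USAGE message, as captured above. A hedged sketch of how such an assertion could be written with os/exec (not the actual no_kubernetes_test.go code; binary path, flags, and exit code are taken from the log):

// Illustrative sketch: run the binary with conflicting flags and assert the
// usage failure observed above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "NoKubernetes-888226", "--no-kubernetes", "--kubernetes-version=v1.28.0", "--driver=kvm2")
	out, err := cmd.CombinedOutput()

	exitCode := 0
	if ee, ok := err.(*exec.ExitError); ok {
		exitCode = ee.ExitCode()
	}

	// The run above exited with status 14 and printed an MK_USAGE error.
	if exitCode != 14 || !strings.Contains(string(out), "MK_USAGE") {
		fmt.Printf("unexpected result: exit=%d output=%q\n", exitCode, string(out))
		return
	}
	fmt.Println("flag conflict rejected as expected")
}
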

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888226 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888226 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (48.248993328s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-888226 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (80.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1217 20:18:12.766019  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:23.008461  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:35.828669  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:36.639823  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:43.490386  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:18:53.567894  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/addons-743931/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m20.104934559s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (15.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (14.747869697s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-888226 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-888226 status -o json: exit status 2 (242.869401ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-888226","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-888226
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (24.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888226 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (24.482688939s)
--- PASS: TestNoKubernetes/serial/Start (24.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (86.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1217 20:19:24.451782  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m26.367119186s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-604702 "pgrep -a kubelet"
I1217 20:19:29.268045  259985 config.go:182] Loaded profile config "auto-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2rqnb" [c8c41626-95cb-4df3-9b42-0bde9df495c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2rqnb" [c8c41626-95cb-4df3-9b42-0bde9df495c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004539065s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)
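
The NetCatPod steps in this group wait for pods matching a label selector ("app=netcat" in "default") to become healthy. A minimal client-go sketch of that polling pattern, assuming a reachable kubeconfig; the path below is illustrative and the real helpers in helpers_test.go may poll differently:

// Illustrative: poll for pods matching a label selector until one is Running.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == v1.PodRunning {
			fmt.Println("pod is Running:", pods.Items[0].Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}
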

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22186-255930/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
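
The step above checks the version-specific cache directory (v0.0.0) to confirm that a --no-kubernetes start downloaded no Kubernetes binaries. A small sketch of one way to make that assertion; the exact check in no_kubernetes_test.go may differ:

// Illustrative: the cache directory should be absent or empty after a
// --no-kubernetes start.
package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/home/jenkins/minikube-integration/22186-255930/.minikube/cache/linux/amd64/v0.0.0"
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("cache directory absent: nothing was downloaded")
		return
	}
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("unexpected cached file:", e.Name())
	}
	if len(entries) == 0 {
		fmt.Println("cache directory empty: nothing was downloaded")
	}
}
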

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-888226 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-888226 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.218029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-888226
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-888226: (1.370241971s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (33.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888226 --driver=kvm2 
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888226 --driver=kvm2 : (33.226053333s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
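
The DNS, Localhost, and HairPin checks above all run probes inside the netcat deployment: nslookup of kubernetes.default, then `nc -w 5 -i 5 -z` against localhost and against the deployment's own service name (the hairpin case). A minimal wrapper around the same kubectl invocation, for illustration only; the context and target names come from the log above:

// Illustrative: ask the netcat pod to dial a target, mirroring the nc flags
// used by the tests (-z scan only, -w 5 timeout).
package main

import (
	"fmt"
	"os/exec"
)

func probe(kctx, target string) error {
	cmd := exec.Command("kubectl", "--context", kctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	return cmd.Run()
}

func main() {
	// Dialing the service's own name from inside the pod is the hairpin check.
	if err := probe("auto-604702", "netcat"); err != nil {
		fmt.Println("hairpin probe failed:", err)
		return
	}
	fmt.Println("hairpin probe succeeded")
}
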

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (113.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m53.47183971s)
--- PASS: TestNetworkPlugins/group/calico/Start (113.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-888226 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-888226 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.486087ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (88.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m28.941628302s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-ks5kp" [0cb8923b-3223-4c70-ad33-18b8aee1ba8c] Running
E1217 20:20:46.373422  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0059166s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-604702 "pgrep -a kubelet"
I1217 20:20:49.141765  259985 config.go:182] Loaded profile config "kindnet-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jm8gk" [45e15006-6f29-432e-b6b2-7fa29542e043] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 20:20:51.964522  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-jm8gk" [45e15006-6f29-432e-b6b2-7fa29542e043] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004768371s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (64.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m4.730366738s)
--- PASS: TestNetworkPlugins/group/false/Start (64.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (90.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1217 20:21:19.670654  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m30.512998022s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-604702 "pgrep -a kubelet"
I1217 20:21:41.738459  259985 config.go:182] Loaded profile config "custom-flannel-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tp4mx" [cc6b28f1-b8db-4103-9728-bcaa8552f9a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tp4mx" [cc6b28f1-b8db-4103-9728-bcaa8552f9a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006222822s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-8k2r7" [1d78f2d6-3a97-4dba-8ee0-3dae1d6a7f87] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005712794s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-604702 "pgrep -a kubelet"
I1217 20:21:55.683154  259985 config.go:182] Loaded profile config "calico-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hb5hh" [c2aa16ce-3327-4ac3-bdfb-36dcae34d28c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hb5hh" [c2aa16ce-3327-4ac3-bdfb-36dcae34d28c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006769037s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (68.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m8.274830579s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-604702 "pgrep -a kubelet"
I1217 20:22:23.524040  259985 config.go:182] Loaded profile config "false-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fdzs7" [61a44c39-58a7-41c4-bc3e-701b1c737e84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fdzs7" [61a44c39-58a7-41c4-bc3e-701b1c737e84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.006002998s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (73.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m13.567803376s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-604702 "pgrep -a kubelet"
I1217 20:22:49.338100  259985 config.go:182] Loaded profile config "enable-default-cni-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8rwhl" [b974e61a-f93e-482e-8b1d-04d30d750aca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8rwhl" [b974e61a-f93e-482e-8b1d-04d30d750aca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006945231s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (101.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-604702 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m41.905228107s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (101.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (86.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257144 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-257144 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m26.013735965s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (86.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-jpfrc" [81fa3e6c-d1ca-4597-a548-fc09a2c2b10e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005114869s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-604702 "pgrep -a kubelet"
I1217 20:23:26.797363  259985 config.go:182] Loaded profile config "flannel-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-w7mrm" [4dfd20ad-1179-4dd4-86da-2342262cc6b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 20:23:30.215201  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-w7mrm" [4dfd20ad-1179-4dd4-86da-2342262cc6b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.005288361s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-604702 "pgrep -a kubelet"
I1217 20:23:38.876242  259985 config.go:182] Loaded profile config "bridge-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-604702 replace --force -f testdata/netcat-deployment.yaml
I1217 20:23:39.276629  259985 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hnw46" [dc69cca9-aee8-4b4b-8f2c-fbedf31bdc39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hnw46" [dc69cca9-aee8-4b4b-8f2c-fbedf31bdc39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005476517s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-314891 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-314891 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: (1m11.299633929s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (88.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-464478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.3
E1217 20:24:29.495790  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.502405  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.514488  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.535777  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.578114  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.659960  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:29.821571  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:30.143843  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:30.786241  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:32.068097  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:34.630299  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-464478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.3: (1m28.676321136s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-604702 "pgrep -a kubelet"
I1217 20:24:34.987322  259985 config.go:182] Loaded profile config "kubenet-604702": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-604702 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dg26p" [b141cab8-c352-4f82-b0f5-af9da2c91e9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 20:24:39.751765  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-dg26p" [b141cab8-c352-4f82-b0f5-af9da2c91e9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.006033351s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-257144 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [76815dc9-3efb-457c-859e-66fd124dec61] Pending
helpers_test.go:353: "busybox" [76815dc9-3efb-457c-859e-66fd124dec61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [76815dc9-3efb-457c-859e-66fd124dec61] Running
E1217 20:24:53.485413  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.037357062s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-257144 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-604702 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-604702 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.59981171s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-257144 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

TestStartStop/group/old-k8s-version/serial/Stop (13.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-257144 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-257144 --alsologtostderr -v=3: (13.548772418s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-867080 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.3
E1217 20:25:10.407866  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-750489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:10.476103  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-867080 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.3: (1m3.046419505s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.05s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314891 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [478bdb9e-9c07-4d44-9607-d196683d8461] Pending
helpers_test.go:353: "busybox" [478bdb9e-9c07-4d44-9607-d196683d8461] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [478bdb9e-9c07-4d44-9607-d196683d8461] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006003786s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314891 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257144 -n old-k8s-version-257144
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257144 -n old-k8s-version-257144: exit status 7 (88.033188ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-257144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (67.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257144 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-257144 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m7.414300244s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257144 -n old-k8s-version-257144
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (67.77s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-314891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-314891 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/no-preload/serial/Stop (14.63s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-314891 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-314891 --alsologtostderr -v=3: (14.63077022s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.63s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314891 -n no-preload-314891
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314891 -n no-preload-314891: exit status 7 (81.696541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-314891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (52.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-314891 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-314891 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: (51.925712724s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-314891 -n no-preload-314891
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.26s)

TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-464478 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c8275d1c-d1f5-4553-a479-ce866dd323f8] Pending
helpers_test.go:353: "busybox" [c8275d1c-d1f5-4553-a479-ce866dd323f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c8275d1c-d1f5-4553-a479-ce866dd323f8] Running
E1217 20:25:42.927738  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:42.934236  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:42.945710  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:42.967244  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:43.009212  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:43.090785  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:43.252485  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:43.573846  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:44.215188  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005306402s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-464478 exec busybox -- /bin/sh -c "ulimit -n"
E1217 20:25:45.497581  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-464478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-464478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.163311556s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-464478 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (13.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-464478 --alsologtostderr -v=3
E1217 20:25:48.059634  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:51.438096  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/auto-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:51.964962  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/skaffold-444067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:25:53.181754  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-464478 --alsologtostderr -v=3: (13.274695162s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464478 -n embed-certs-464478
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464478 -n embed-certs-464478: exit status 7 (86.536134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-464478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-464478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.3
E1217 20:26:03.423211  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-464478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.3: (55.087741261s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-464478 -n embed-certs-464478
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-867080 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [94c85624-fda4-470f-945a-f4afa2c561c4] Pending
helpers_test.go:353: "busybox" [94c85624-fda4-470f-945a-f4afa2c561c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [94c85624-fda4-470f-945a-f4afa2c561c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006965569s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-867080 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-867080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-867080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.317486871s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-867080 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-867080 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-867080 --alsologtostderr -v=3: (12.332437076s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-4t4d6" [9a3e113e-214a-43e2-bc21-6ba003f04cec] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 20:26:23.904782  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-4t4d6" [9a3e113e-214a-43e2-bc21-6ba003f04cec] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.0052791s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-glxbn" [2de60846-ba8e-4092-92fe-9699d2649c12] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-glxbn" [2de60846-ba8e-4092-92fe-9699d2649c12] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005894497s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080: exit status 7 (93.654966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-867080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-867080 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-867080 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.3: (46.474418502s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.78s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-4t4d6" [9a3e113e-214a-43e2-bc21-6ba003f04cec] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005009547s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-257144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-257144 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-257144 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257144 -n old-k8s-version-257144
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257144 -n old-k8s-version-257144: exit status 2 (231.043684ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257144 -n old-k8s-version-257144
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257144 -n old-k8s-version-257144: exit status 2 (252.29866ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-257144 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257144 -n old-k8s-version-257144
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257144 -n old-k8s-version-257144
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-glxbn" [2de60846-ba8e-4092-92fe-9699d2649c12] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005553743s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-314891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/FirstStart (57.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155726 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
E1217 20:26:41.989210  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:41.995670  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.007191  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.029252  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.070762  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.152572  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.314164  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:42.636055  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:26:43.278020  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155726 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: (57.454477828s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.45s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-314891 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-314891 --alsologtostderr -v=1
E1217 20:26:44.559424  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314891 -n no-preload-314891
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314891 -n no-preload-314891: exit status 2 (234.903497ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314891 -n no-preload-314891
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314891 -n no-preload-314891: exit status 2 (223.773586ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-314891 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-314891 -n no-preload-314891
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-314891 -n no-preload-314891
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vhcbq" [e3e6ad24-25c1-4193-ae5a-50597beacba2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vhcbq" [e3e6ad24-25c1-4193-ae5a-50597beacba2] Running
E1217 20:26:59.729041  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:02.484338  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004591017s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vhcbq" [e3e6ad24-25c1-4193-ae5a-50597beacba2] Running
E1217 20:27:04.866982  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/kindnet-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007310634s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-464478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-464478 image list --format=json
E1217 20:27:09.970861  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-464478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464478 -n embed-certs-464478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464478 -n embed-certs-464478: exit status 2 (293.676857ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464478 -n embed-certs-464478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464478 -n embed-certs-464478: exit status 2 (299.457659ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-464478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-464478 -n embed-certs-464478
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-464478 -n embed-certs-464478
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k7n9v" [5be9e211-0075-44d5-bbb2-b425f8219eff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k7n9v" [5be9e211-0075-44d5-bbb2-b425f8219eff] Running
E1217 20:27:22.966557  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:23.860778  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:23.867308  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:23.878798  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:23.900261  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:23.941717  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:24.023228  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:24.184821  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:24.506639  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:25.148624  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:26.430179  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005297003s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-k7n9v" [5be9e211-0075-44d5-bbb2-b425f8219eff] Running
E1217 20:27:28.992406  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:30.453302  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004452451s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-867080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-867080 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-867080 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080: exit status 2 (237.045836ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080: exit status 2 (229.779268ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-867080 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
E1217 20:27:34.113930  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867080 -n default-k8s-diff-port-867080
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-155726 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/newest-cni/serial/Stop (6.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-155726 --alsologtostderr -v=3
E1217 20:27:44.356227  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-155726 --alsologtostderr -v=3: (6.847680435s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.85s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155726 -n newest-cni-155726
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155726 -n newest-cni-155726: exit status 7 (67.582148ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-155726 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155726 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-rc.1
E1217 20:27:49.625608  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.632017  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.643447  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.664944  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.706471  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.787945  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:49.949579  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:50.271405  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:50.912961  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:52.194673  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:54.756705  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:27:59.878749  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:02.087178  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/functional-240388/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:02.513892  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/gvisor-811570/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:03.928426  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/custom-flannel-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:04.837929  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/false-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:10.120294  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/enable-default-cni-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:28:11.415501  259985 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-255930/.minikube/profiles/calico-604702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155726 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-rc.1: (29.564837731s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155726 -n newest-cni-155726
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-155726 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-155726 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155726 -n newest-cni-155726
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155726 -n newest-cni-155726: exit status 2 (232.361997ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155726 -n newest-cni-155726
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155726 -n newest-cni-155726: exit status 2 (232.623171ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-155726 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155726 -n newest-cni-155726
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155726 -n newest-cni-155726
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                    

Test skip (45/452)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
289 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
317 TestKicCustomNetwork 0
318 TestKicExistingNetwork 0
319 TestKicCustomSubnet 0
320 TestKicStaticIP 0
352 TestChangeNoneUser 0
355 TestScheduledStopWindows 0
359 TestInsufficientStorage 0
363 TestMissingContainerUpgrade 0
375 TestNetworkPlugins/group/cilium 4.32
398 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-604702 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-604702" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-604702

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-604702" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-604702"

                                                
                                                
----------------------- debugLogs end: cilium-604702 [took: 4.131508984s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-604702" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-604702
--- SKIP: TestNetworkPlugins/group/cilium (4.32s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-709467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-709467
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    