Test Report: KVM_Linux 22122

022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Failed tests (1/452)

Order  Failed test                                    Duration (s)
526    TestStartStop/group/no-preload/serial/Pause    40.36
TestStartStop/group/no-preload/serial/Pause (40.36s)
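For reference, the failing sequence can be replayed by hand from the dbg lines below (the profile name no-preload-480987 is specific to this run; substitute your own profile):

  out/minikube-linux-amd64 pause -p no-preload-480987 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-480987 -n no-preload-480987   # test wants "Paused"; this run returned "Stopped"
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-480987 -n no-preload-480987
  out/minikube-linux-amd64 unpause -p no-preload-480987 --alsologtostderr -v=1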

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-480987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-480987 --alsologtostderr -v=1: (1.763339548s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480987 -n no-preload-480987
E1213 14:15:41.491103   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:42.535885   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:45.848569   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480987 -n no-preload-480987: exit status 2 (15.950136938s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480987 -n no-preload-480987
E1213 14:15:51.086360   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:53.648077   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:55.594591   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:58.769666   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480987 -n no-preload-480987: exit status 2 (15.782450512s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-480987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-480987 --alsologtostderr -v=1: (1.000034805s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480987 -n no-preload-480987
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-480987 -n no-preload-480987
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480987 -n no-preload-480987
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480987 logs -n 25
E1213 14:16:09.011357   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-480987 logs -n 25: (1.793498316s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ guest-719825 ssh which VBoxControl                                                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which wget                                                                                                                                                                                               │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which socat                                                                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which git                                                                                                                                                                                                │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which podman                                                                                                                                                                                             │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which iptables                                                                                                                                                                                           │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which docker                                                                                                                                                                                             │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which curl                                                                                                                                                                                               │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /data | grep /data                                                                                                                                                                            │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/minikube | grep /var/lib/minikube                                                                                                                                                    │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox                                                                                                                                                      │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/cni | grep /var/lib/cni                                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet                                                                                                                                                      │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/docker | grep /var/lib/docker                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh cat /version.json                                                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'                                                                                                                                         │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ delete  │ -p guest-719825                                                                                                                                                                                                           │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ image   │ no-preload-480987 image list --format=json                                                                                                                                                                                │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ pause   │ -p no-preload-480987 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-994510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ stop    │ -p newest-cni-994510 --alsologtostderr -v=3                                                                                                                                                                               │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-994510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │ 13 Dec 25 14:16 UTC │
	│ start   │ -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0 │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │                     │
	│ unpause │ -p no-preload-480987 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │ 13 Dec 25 14:16 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:16:01
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:16:01.125524   65660 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:16:01.125796   65660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:16:01.125808   65660 out.go:374] Setting ErrFile to fd 2...
	I1213 14:16:01.125813   65660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:16:01.126005   65660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 14:16:01.126504   65660 out.go:368] Setting JSON to false
	I1213 14:16:01.127470   65660 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7112,"bootTime":1765628249,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:16:01.127542   65660 start.go:143] virtualization: kvm guest
	I1213 14:16:01.130102   65660 out.go:179] * [newest-cni-994510] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:16:01.131794   65660 notify.go:221] Checking for updates...
	I1213 14:16:01.131884   65660 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:16:01.133773   65660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:16:01.135572   65660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 14:16:01.137334   65660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 14:16:01.138729   65660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:16:01.140547   65660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:16:01.142283   65660 config.go:182] Loaded profile config "newest-cni-994510": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 14:16:01.142955   65660 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:16:01.181268   65660 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 14:16:01.182751   65660 start.go:309] selected driver: kvm2
	I1213 14:16:01.182778   65660 start.go:927] validating driver "kvm2" against &{Name:newest-cni-994510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:newest-cni-994510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:16:01.182906   65660 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:16:01.183932   65660 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 14:16:01.183971   65660 cni.go:84] Creating CNI manager for ""
	I1213 14:16:01.184040   65660 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 14:16:01.184077   65660 start.go:353] cluster config:
	{Name:newest-cni-994510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-994510 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:16:01.184172   65660 iso.go:125] acquiring lock: {Name:mkdb244ed0b6c01d7604ff94d6687c3511cb9170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:16:01.186634   65660 out.go:179] * Starting "newest-cni-994510" primary control-plane node in "newest-cni-994510" cluster
	I1213 14:16:01.188000   65660 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 14:16:01.188043   65660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 14:16:01.188051   65660 cache.go:65] Caching tarball of preloaded images
	I1213 14:16:01.188175   65660 preload.go:238] Found /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 14:16:01.188192   65660 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 14:16:01.188372   65660 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/newest-cni-994510/config.json ...
	I1213 14:16:01.188674   65660 start.go:360] acquireMachinesLock for newest-cni-994510: {Name:mkb4e7ea4da4358e2127ad51f1ac2815f0b79c60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:16:01.188729   65660 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "newest-cni-994510"
	I1213 14:16:01.188745   65660 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:16:01.188750   65660 fix.go:54] fixHost starting: 
	I1213 14:16:01.191035   65660 fix.go:112] recreateIfNeeded on newest-cni-994510: state=Stopped err=<nil>
	W1213 14:16:01.191077   65660 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:15:59.453385   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:15:59.453464   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:01.193182   65660 out.go:252] * Restarting existing kvm2 VM for "newest-cni-994510" ...
	I1213 14:16:01.193250   65660 main.go:143] libmachine: starting domain...
	I1213 14:16:01.193262   65660 main.go:143] libmachine: ensuring networks are active...
	I1213 14:16:01.194575   65660 main.go:143] libmachine: Ensuring network default is active
	I1213 14:16:01.195131   65660 main.go:143] libmachine: Ensuring network mk-newest-cni-994510 is active
	I1213 14:16:01.195757   65660 main.go:143] libmachine: getting domain XML...
	I1213 14:16:01.197197   65660 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-994510</name>
	  <uuid>30fbdf00-43d2-4fb6-8630-f0db2bc365e5</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-16298/.minikube/machines/newest-cni-994510/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-16298/.minikube/machines/newest-cni-994510/newest-cni-994510.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:44:4a:b4'/>
	      <source network='mk-newest-cni-994510'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:5a:df:a1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 14:16:02.595304   65660 main.go:143] libmachine: waiting for domain to start...
	I1213 14:16:02.596894   65660 main.go:143] libmachine: domain is now running
	I1213 14:16:02.596945   65660 main.go:143] libmachine: waiting for IP...
	I1213 14:16:02.597844   65660 main.go:143] libmachine: domain newest-cni-994510 has defined MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.598831   65660 main.go:143] libmachine: domain newest-cni-994510 has current primary IP address 192.168.72.114 and MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.598852   65660 main.go:143] libmachine: found domain IP: 192.168.72.114
	I1213 14:16:02.598859   65660 main.go:143] libmachine: reserving static IP address...
	I1213 14:16:02.599517   65660 main.go:143] libmachine: found host DHCP lease matching {name: "newest-cni-994510", mac: "52:54:00:44:4a:b4", ip: "192.168.72.114"} in network mk-newest-cni-994510: {Iface:virbr4 ExpiryTime:2025-12-13 15:15:09 +0000 UTC Type:0 Mac:52:54:00:44:4a:b4 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:newest-cni-994510 Clientid:01:52:54:00:44:4a:b4}
	I1213 14:16:02.599551   65660 main.go:143] libmachine: skip adding static IP to network mk-newest-cni-994510 - found existing host DHCP lease matching {name: "newest-cni-994510", mac: "52:54:00:44:4a:b4", ip: "192.168.72.114"}
	I1213 14:16:02.599560   65660 main.go:143] libmachine: reserved static IP address 192.168.72.114 for domain newest-cni-994510
	I1213 14:16:02.599566   65660 main.go:143] libmachine: waiting for SSH...
	I1213 14:16:02.599571   65660 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 14:16:02.602167   65660 main.go:143] libmachine: domain newest-cni-994510 has defined MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.602671   65660 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:4a:b4", ip: ""} in network mk-newest-cni-994510: {Iface:virbr4 ExpiryTime:2025-12-13 15:15:09 +0000 UTC Type:0 Mac:52:54:00:44:4a:b4 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:newest-cni-994510 Clientid:01:52:54:00:44:4a:b4}
	I1213 14:16:02.602700   65660 main.go:143] libmachine: domain newest-cni-994510 has defined IP address 192.168.72.114 and MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.602916   65660 main.go:143] libmachine: Using SSH client type: native
	I1213 14:16:02.603157   65660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I1213 14:16:02.603168   65660 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 14:16:05.663680   65660 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
	I1213 14:16:04.456629   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:16:04.456680   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.791507   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": read tcp 192.168.61.1:54372->192.168.61.21:8444: read: connection reset by peer
	I1213 14:16:05.791551   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.792084   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:05.944530   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.945425   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	
	
	==> Docker <==
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.200344652Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.200485128Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:15:20 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:20Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.237676952Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.238207133Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.246967518Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.247009573Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.978311358Z" level=info msg="ignoring event" container=c522abf03bd68d5546f765f4b5f89231a556fd352bdc3bf6c742a5b152ef313f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 14:15:21 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1f03d7fa4950bf1999afa71cea62fd1bcf1d2684c789709041868d8f710fc0e/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.339669699Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.408770252Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.408895320Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:15:32 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:32Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 14:15:33 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:33.760728266Z" level=error msg="Handler for POST /v1.51/containers/7731d9ba696b/pause returned error: cannot pause container 7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b: OCI runtime pause failed: container not running"
	Dec 13 14:15:33 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:33.837874481Z" level=info msg="ignoring event" container=7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 14:16:08 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 14:16:08 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:08Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-9278n_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"df1ae620e7830da08877464b409a1a379127a6f2a24e16d49affeaf5da36304b\""
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.908764997Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.908814325Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.920308371Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.920350681Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.044310286Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.143181834Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.143362360Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:16:09 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cc149c15604ed       6e38f40d628db                                                                                         1 second ago         Running             storage-provisioner       2                   7fe73cfac55b5       storage-provisioner                         kube-system
	c87ce8eecf3dc       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        50 seconds ago       Running             kubernetes-dashboard      0                   e4c80e4356825       kubernetes-dashboard-b84665fb8-qgkp8        kubernetes-dashboard
	12db3d62fa358       56cc512116c8f                                                                                         58 seconds ago       Running             busybox                   1                   c4d19dba95faf       busybox                                     default
	df6bc06c07314       aa5e3ebc0dfed                                                                                         59 seconds ago       Running             coredns                   1                   42e2df8bc0c2a       coredns-7d764666f9-vqfqb                    kube-system
	d56ac35f2023e       8a4ded35a3eb1                                                                                         About a minute ago   Running             kube-proxy                1                   4df6888cada75       kube-proxy-bcqzf                            kube-system
	7731d9ba696bc       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   7fe73cfac55b5       storage-provisioner                         kube-system
	bb9406d173c82       7bb6219ddab95                                                                                         About a minute ago   Running             kube-scheduler            1                   598ae50e4090f       kube-scheduler-no-preload-480987            kube-system
	abc673268b8c4       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   f25680d6231bd       etcd-no-preload-480987                      kube-system
	f15386049dc5d       45f3cc72d235f                                                                                         About a minute ago   Running             kube-controller-manager   1                   7c3c0ac1e767d       kube-controller-manager-no-preload-480987   kube-system
	c04badbd06c59       aa9d02839d8de                                                                                         About a minute ago   Running             kube-apiserver            1                   894e50d9bbd2f       kube-apiserver-no-preload-480987            kube-system
	a753bda60e00b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   3efacce8eff61       busybox                                     default
	a83817d1e3a19       aa5e3ebc0dfed                                                                                         2 minutes ago        Exited              coredns                   0                   bbeedeba027f5       coredns-7d764666f9-vqfqb                    kube-system
	825b5a74aef54       8a4ded35a3eb1                                                                                         2 minutes ago        Exited              kube-proxy                0                   58393cab0a018       kube-proxy-bcqzf                            kube-system
	dbcd28d379e9d       7bb6219ddab95                                                                                         2 minutes ago        Exited              kube-scheduler            0                   3aeb2c8b83364       kube-scheduler-no-preload-480987            kube-system
	421c3cd800264       a3e246e9556e9                                                                                         2 minutes ago        Exited              etcd                      0                   f584e9b37f307       etcd-no-preload-480987                      kube-system
	0a4ff8bbd246b       45f3cc72d235f                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   3a909272bcfee       kube-controller-manager-no-preload-480987   kube-system
	15efb3b314731       aa9d02839d8de                                                                                         2 minutes ago        Exited              kube-apiserver            0                   6cd8631e870c0       kube-apiserver-no-preload-480987            kube-system
	
	
	==> coredns [a83817d1e3a1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48197 - 36083 "HINFO IN 948520708112921410.8802066444027197549. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08414206s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df6bc06c0731] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41339 - 43934 "HINFO IN 5178304912045032897.7220680391157509907. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10370268s
	
	
	==> describe nodes <==
	Name:               no-preload-480987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=no-preload-480987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T14_13_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 14:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480987
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 14:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.249
	  Hostname:    no-preload-480987
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a518b2b6861e4d398d1398567a956c88
	  System UUID:                a518b2b6-861e-4d39-8d13-98567a956c88
	  Boot ID:                    f2072675-ae25-45ab-b1ff-1c552f111941
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 coredns-7d764666f9-vqfqb                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m44s
	  kube-system                 etcd-no-preload-480987                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m49s
	  kube-system                 kube-apiserver-no-preload-480987              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-no-preload-480987     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-bcqzf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-no-preload-480987              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 metrics-server-5d785b57d4-5xl42               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nkc9p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qgkp8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m45s  node-controller  Node no-preload-480987 event: Registered Node no-preload-480987 in Controller
	  Normal  RegisteredNode  64s    node-controller  Node no-preload-480987 event: Registered Node no-preload-480987 in Controller
	
	
	==> dmesg <==
	[Dec13 14:14] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001357] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.010383] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.784672] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000030] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.155485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.144137] kauditd_printk_skb: 393 callbacks suppressed
	[  +1.726112] kauditd_printk_skb: 161 callbacks suppressed
	[Dec13 14:15] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000056] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.837184] kauditd_printk_skb: 223 callbacks suppressed
	[  +0.228037] kauditd_printk_skb: 72 callbacks suppressed
	[Dec13 14:16] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [421c3cd80026] <==
	{"level":"warn","ts":"2025-12-13T14:13:15.360075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.366738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.382219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.388352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.481328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45230","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:13:30.479876Z","caller":"traceutil/trace.go:172","msg":"trace[1833513221] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"124.821045ms","start":"2025-12-13T14:13:30.354990Z","end":"2025-12-13T14:13:30.479811Z","steps":["trace[1833513221] 'process raft request'  (duration: 124.585013ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:13:30.758630Z","caller":"traceutil/trace.go:172","msg":"trace[2140602732] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"136.602392ms","start":"2025-12-13T14:13:30.622014Z","end":"2025-12-13T14:13:30.758616Z","steps":["trace[2140602732] 'process raft request'  (duration: 136.409305ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:14:15.200825Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T14:14:15.202393Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-480987","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.249:2380"],"advertise-client-urls":["https://192.168.83.249:2379"]}
	{"level":"error","ts":"2025-12-13T14:14:15.202578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:14:22.207006Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:14:22.210578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.210910Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f03e5af8f7ea6d24","current-leader-member-id":"f03e5af8f7ea6d24"}
	{"level":"info","ts":"2025-12-13T14:14:22.211541Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T14:14:22.211817Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T14:14:22.214632Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:14:22.214878Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:14:22.214910Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T14:14:22.215259Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:14:22.215416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.249:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:14:22.215558Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.249:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.218997Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.249:2380"}
	{"level":"error","ts":"2025-12-13T14:14:22.219273Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.249:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.219421Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.249:2380"}
	{"level":"info","ts":"2025-12-13T14:14:22.219571Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-480987","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.249:2380"],"advertise-client-urls":["https://192.168.83.249:2379"]}
	
	
	==> etcd [abc673268b8c] <==
	{"level":"warn","ts":"2025-12-13T14:15:00.561226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.567555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.582549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.597405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.610812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.623256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.636624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.646981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.659299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.682561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.687891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.715178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.740572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.754560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.765201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.833533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:15:16.036041Z","caller":"traceutil/trace.go:172","msg":"trace[398394203] linearizableReadLoop","detail":"{readStateIndex:775; appliedIndex:775; }","duration":"193.985562ms","start":"2025-12-13T14:15:15.842027Z","end":"2025-12-13T14:15:16.036013Z","steps":["trace[398394203] 'read index received'  (duration: 193.980301ms)","trace[398394203] 'applied index is now lower than readState.Index'  (duration: 4.69µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T14:15:16.036309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.210969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T14:15:16.036359Z","caller":"traceutil/trace.go:172","msg":"trace[742166215] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:727; }","duration":"194.326868ms","start":"2025-12-13T14:15:15.842021Z","end":"2025-12-13T14:15:16.036348Z","steps":["trace[742166215] 'agreement among raft nodes before linearized reading'  (duration: 194.179953ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:21.850284Z","caller":"traceutil/trace.go:172","msg":"trace[1856919751] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"119.966499ms","start":"2025-12-13T14:15:21.730293Z","end":"2025-12-13T14:15:21.850259Z","steps":["trace[1856919751] 'process raft request'  (duration: 119.771866ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:21.870555Z","caller":"traceutil/trace.go:172","msg":"trace[1096756970] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"139.221775ms","start":"2025-12-13T14:15:21.731316Z","end":"2025-12-13T14:15:21.870538Z","steps":["trace[1096756970] 'process raft request'  (duration: 139.104459ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:22.391408Z","caller":"traceutil/trace.go:172","msg":"trace[135788815] linearizableReadLoop","detail":"{readStateIndex:807; appliedIndex:807; }","duration":"118.341068ms","start":"2025-12-13T14:15:22.273045Z","end":"2025-12-13T14:15:22.391386Z","steps":["trace[135788815] 'read index received'  (duration: 118.33172ms)","trace[135788815] 'applied index is now lower than readState.Index'  (duration: 8.341µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T14:15:22.391569Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.496908ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T14:15:22.391603Z","caller":"traceutil/trace.go:172","msg":"trace[588260666] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:757; }","duration":"118.552652ms","start":"2025-12-13T14:15:22.273037Z","end":"2025-12-13T14:15:22.391589Z","steps":["trace[588260666] 'agreement among raft nodes before linearized reading'  (duration: 118.470061ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:22.391585Z","caller":"traceutil/trace.go:172","msg":"trace[600189230] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"155.435049ms","start":"2025-12-13T14:15:22.236137Z","end":"2025-12-13T14:15:22.391572Z","steps":["trace[600189230] 'process raft request'  (duration: 155.304345ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:16:09 up 1 min,  0 users,  load average: 2.14, 0.81, 0.30
	Linux no-preload-480987 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [15efb3b31473] <==
	W1213 14:14:24.445471       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.581281       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.594077       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.594172       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.618785       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.664786       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.697063       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.708234       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.756333       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.762851       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.774633       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.796479       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.805360       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.817094       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.822343       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.872137       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.890323       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.926833       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.966475       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.985546       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.988201       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.999081       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.021870       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.025487       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.154191       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c04badbd06c5] <==
	E1213 14:15:02.854728       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 14:15:02.855393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 14:15:03.482345       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.249]
	I1213 14:15:03.487471       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 14:15:04.195966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 14:15:04.274505       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 14:15:04.337748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 14:15:04.356516       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 14:15:05.251412       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 14:15:05.435965       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 14:15:07.692640       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 14:15:08.351748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.118.59"}
	I1213 14:15:08.403429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.234.215"}
	W1213 14:16:06.893461       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 14:16:06.893817       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 14:16:06.893851       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 14:16:06.962470       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 14:16:06.969263       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 14:16:06.969326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0a4ff8bbd246] <==
	I1213 14:13:24.171667       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173371       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173673       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.227517       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:24.171973       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173119       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173204       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173288       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173453       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173836       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174040       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.243849       1 range_allocator.go:177] "Sending events to api server"
	I1213 14:13:24.243896       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 14:13:24.243904       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:24.243916       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174139       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174232       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174313       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174392       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.288212       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-480987" podCIDRs=["10.244.0.0/24"]
	I1213 14:13:24.328300       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.372441       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.372523       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 14:13:24.372530       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 14:13:29.188585       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [f15386049dc5] <==
	I1213 14:15:05.123173       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.127814       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145713       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145767       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145867       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.148926       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.151356       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.153402       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 14:15:05.131028       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.166930       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 14:15:05.168763       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-480987"
	I1213 14:15:05.168796       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.178980       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 14:15:05.275565       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:05.376144       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.376162       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 14:15:05.376168       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 14:15:05.377426       1 shared_informer.go:377] "Caches are synced"
	E1213 14:15:07.975850       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.023416       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.076776       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.087381       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1213 14:15:10.180464       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E1213 14:16:06.970751       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 14:16:06.998273       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [825b5a74aef5] <==
	I1213 14:13:27.434952       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:27.537239       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:27.537315       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.249"]
	E1213 14:13:27.541477       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:13:27.890996       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:13:27.891076       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:13:27.891101       1 server_linux.go:136] "Using iptables Proxier"
	I1213 14:13:28.046345       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:13:28.047596       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 14:13:28.047613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:13:28.069007       1 config.go:200] "Starting service config controller"
	I1213 14:13:28.069351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:13:28.069786       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:13:28.069797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:13:28.084631       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:13:28.084652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:13:28.092180       1 config.go:309] "Starting node config controller"
	I1213 14:13:28.092221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:13:28.092229       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:13:28.172328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:13:28.172494       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:13:28.185119       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d56ac35f2023] <==
	I1213 14:15:04.179196       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:04.280524       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:04.282161       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.249"]
	E1213 14:15:04.282304       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:15:04.416551       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:15:04.416804       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:15:04.416966       1 server_linux.go:136] "Using iptables Proxier"
	I1213 14:15:04.483468       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:15:04.486426       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 14:15:04.486470       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:15:04.514733       1 config.go:200] "Starting service config controller"
	I1213 14:15:04.514829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:15:04.514848       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:15:04.514852       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:15:04.514869       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:15:04.514873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:15:04.531338       1 config.go:309] "Starting node config controller"
	I1213 14:15:04.547621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:15:04.549356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:15:04.619402       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:15:04.632403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 14:15:04.632548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bb9406d173c8] <==
	I1213 14:14:59.768053       1 serving.go:386] Generated self-signed cert in-memory
	W1213 14:15:01.618693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 14:15:01.618832       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 14:15:01.618857       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 14:15:01.619158       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 14:15:01.741589       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 14:15:01.741634       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:15:01.749900       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 14:15:01.755671       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:15:01.758170       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:01.758530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 14:15:01.859577       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [dbcd28d379e9] <==
	E1213 14:13:17.759489       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1213 14:13:17.761454       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 14:13:17.807793       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:17.810508       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1213 14:13:17.828149       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1213 14:13:17.830273       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 14:13:17.838735       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:17.842088       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 14:13:17.864932       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1213 14:13:17.868183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1213 14:13:17.872924       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 14:13:17.874635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 14:13:17.963042       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 14:13:17.965851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 14:13:17.991884       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1213 14:13:17.995477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 14:13:18.019764       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:18.022894       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 14:13:18.028239       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 14:13:18.030500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	I1213 14:13:19.979206       1 shared_informer.go:377] "Caches are synced"
	I1213 14:14:15.158169       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 14:14:15.161149       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 14:14:15.161158       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 14:14:15.161189       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.518425    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2ea632f31ec6ee33b64b33739a273e0-k8s-certs\") pod \"kube-controller-manager-no-preload-480987\" (UID: \"d2ea632f31ec6ee33b64b33739a273e0\") " pod="kube-system/kube-controller-manager-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.518449    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2ea632f31ec6ee33b64b33739a273e0-kubeconfig\") pod \"kube-controller-manager-no-preload-480987\" (UID: \"d2ea632f31ec6ee33b64b33739a273e0\") " pod="kube-system/kube-controller-manager-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.553159    4407 apiserver.go:52] "Watching apiserver"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619393    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04edfc90843076c87e63de2a69653f0a-k8s-certs\") pod \"kube-apiserver-no-preload-480987\" (UID: \"04edfc90843076c87e63de2a69653f0a\") " pod="kube-system/kube-apiserver-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619768    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04edfc90843076c87e63de2a69653f0a-usr-share-ca-certificates\") pod \"kube-apiserver-no-preload-480987\" (UID: \"04edfc90843076c87e63de2a69653f0a\") " pod="kube-system/kube-apiserver-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619803    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/6eb935396208fda222fa24605b775590-etcd-certs\") pod \"etcd-no-preload-480987\" (UID: \"6eb935396208fda222fa24605b775590\") " pod="kube-system/etcd-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619860    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04edfc90843076c87e63de2a69653f0a-ca-certs\") pod \"kube-apiserver-no-preload-480987\" (UID: \"04edfc90843076c87e63de2a69653f0a\") " pod="kube-system/kube-apiserver-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619874    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/6eb935396208fda222fa24605b775590-etcd-data\") pod \"etcd-no-preload-480987\" (UID: \"6eb935396208fda222fa24605b775590\") " pod="kube-system/etcd-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.660740    4407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720736    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc17afa-ac4f-4744-b380-dff0bf9d5f12-lib-modules\") pod \"kube-proxy-bcqzf\" (UID: \"9fc17afa-ac4f-4744-b380-dff0bf9d5f12\") " pod="kube-system/kube-proxy-bcqzf"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720913    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc17afa-ac4f-4744-b380-dff0bf9d5f12-xtables-lock\") pod \"kube-proxy-bcqzf\" (UID: \"9fc17afa-ac4f-4744-b380-dff0bf9d5f12\") " pod="kube-system/kube-proxy-bcqzf"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720958    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45d578da-7a44-4ee1-8dd2-77c3d4816633-tmp\") pod \"storage-provisioner\" (UID: \"45d578da-7a44-4ee1-8dd2-77c3d4816633\") " pod="kube-system/storage-provisioner"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.871656    4407 scope.go:122] "RemoveContainer" containerID="7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.922718    4407 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.922803    4407 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.923293    4407 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-5xl42_kube-system(8be1da2d-1636-4055-8e9d-5ff3844c3e45): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.923332    4407 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-5xl42" podUID="8be1da2d-1636-4055-8e9d-5ff3844c3e45"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.152638    4407 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.152706    4407 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.153039    4407 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-nkc9p_kubernetes-dashboard(9ed00e7f-fb97-46c1-bba9-05beb5234b7e): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.153476    4407 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkc9p" podUID="9ed00e7f-fb97-46c1-bba9-05beb5234b7e"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.420640    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-480987" containerName="etcd"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.425333    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-480987" containerName="kube-apiserver"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.425655    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-480987" containerName="kube-scheduler"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.426720    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-480987" containerName="kube-controller-manager"
	
	
	==> kubernetes-dashboard [c87ce8eecf3d] <==
	2025/12/13 14:15:20 Starting overwatch
	2025/12/13 14:15:20 Using namespace: kubernetes-dashboard
	2025/12/13 14:15:20 Using in-cluster config to connect to apiserver
	2025/12/13 14:15:20 Using secret token for csrf signing
	2025/12/13 14:15:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 14:15:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 14:15:20 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 14:15:20 Generating JWE encryption key
	2025/12/13 14:15:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 14:15:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 14:15:21 Initializing JWE encryption key from synchronized object
	2025/12/13 14:15:21 Creating in-cluster Sidecar client
	2025/12/13 14:15:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 14:15:21 Serving insecurely on HTTP port: 9090
	2025/12/13 14:16:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7731d9ba696b] <==
	I1213 14:15:03.625799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 14:15:33.643923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cc149c15604e] <==
	I1213 14:16:09.293714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 14:16:09.325031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 14:16:09.325859       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 14:16:09.334355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480987 -n no-preload-480987
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-480987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p: exit status 1 (86.881641ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-5xl42" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-nkc9p" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480987 -n no-preload-480987
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480987 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-480987 logs -n 25: (1.479419028s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ guest-719825 ssh which VBoxControl                                                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which wget                                                                                                                                                                                               │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which socat                                                                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which git                                                                                                                                                                                                │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which podman                                                                                                                                                                                             │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which iptables                                                                                                                                                                                           │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which docker                                                                                                                                                                                             │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh which curl                                                                                                                                                                                               │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /data | grep /data                                                                                                                                                                            │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/minikube | grep /var/lib/minikube                                                                                                                                                    │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox                                                                                                                                                      │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/cni | grep /var/lib/cni                                                                                                                                                              │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet                                                                                                                                                      │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh df -t ext4 /var/lib/docker | grep /var/lib/docker                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh cat /version.json                                                                                                                                                                                        │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ ssh     │ guest-719825 ssh test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'                                                                                                                                         │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ delete  │ -p guest-719825                                                                                                                                                                                                           │ guest-719825      │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ image   │ no-preload-480987 image list --format=json                                                                                                                                                                                │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ pause   │ -p no-preload-480987 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ addons  │ enable metrics-server -p newest-cni-994510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:15 UTC │
	│ stop    │ -p newest-cni-994510 --alsologtostderr -v=3                                                                                                                                                                               │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:15 UTC │ 13 Dec 25 14:16 UTC │
	│ addons  │ enable dashboard -p newest-cni-994510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │ 13 Dec 25 14:16 UTC │
	│ start   │ -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0 │ newest-cni-994510 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │                     │
	│ unpause │ -p no-preload-480987 --alsologtostderr -v=1                                                                                                                                                                               │ no-preload-480987 │ jenkins │ v1.37.0 │ 13 Dec 25 14:16 UTC │ 13 Dec 25 14:16 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:16:01
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:16:01.125524   65660 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:16:01.125796   65660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:16:01.125808   65660 out.go:374] Setting ErrFile to fd 2...
	I1213 14:16:01.125813   65660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:16:01.126005   65660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 14:16:01.126504   65660 out.go:368] Setting JSON to false
	I1213 14:16:01.127470   65660 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7112,"bootTime":1765628249,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 14:16:01.127542   65660 start.go:143] virtualization: kvm guest
	I1213 14:16:01.130102   65660 out.go:179] * [newest-cni-994510] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 14:16:01.131794   65660 notify.go:221] Checking for updates...
	I1213 14:16:01.131884   65660 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:16:01.133773   65660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:16:01.135572   65660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 14:16:01.137334   65660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 14:16:01.138729   65660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 14:16:01.140547   65660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:16:01.142283   65660 config.go:182] Loaded profile config "newest-cni-994510": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 14:16:01.142955   65660 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:16:01.181268   65660 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 14:16:01.182751   65660 start.go:309] selected driver: kvm2
	I1213 14:16:01.182778   65660 start.go:927] validating driver "kvm2" against &{Name:newest-cni-994510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-994510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:16:01.182906   65660 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:16:01.183932   65660 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 14:16:01.183971   65660 cni.go:84] Creating CNI manager for ""
	I1213 14:16:01.184040   65660 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 14:16:01.184077   65660 start.go:353] cluster config:
	{Name:newest-cni-994510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-994510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:16:01.184172   65660 iso.go:125] acquiring lock: {Name:mkdb244ed0b6c01d7604ff94d6687c3511cb9170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:16:01.186634   65660 out.go:179] * Starting "newest-cni-994510" primary control-plane node in "newest-cni-994510" cluster
	I1213 14:16:01.188000   65660 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 14:16:01.188043   65660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 14:16:01.188051   65660 cache.go:65] Caching tarball of preloaded images
	I1213 14:16:01.188175   65660 preload.go:238] Found /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 14:16:01.188192   65660 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 14:16:01.188372   65660 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/newest-cni-994510/config.json ...
	I1213 14:16:01.188674   65660 start.go:360] acquireMachinesLock for newest-cni-994510: {Name:mkb4e7ea4da4358e2127ad51f1ac2815f0b79c60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 14:16:01.188729   65660 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "newest-cni-994510"
	I1213 14:16:01.188745   65660 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:16:01.188750   65660 fix.go:54] fixHost starting: 
	I1213 14:16:01.191035   65660 fix.go:112] recreateIfNeeded on newest-cni-994510: state=Stopped err=<nil>
	W1213 14:16:01.191077   65660 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:15:59.453385   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:15:59.453464   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:01.193182   65660 out.go:252] * Restarting existing kvm2 VM for "newest-cni-994510" ...
	I1213 14:16:01.193250   65660 main.go:143] libmachine: starting domain...
	I1213 14:16:01.193262   65660 main.go:143] libmachine: ensuring networks are active...
	I1213 14:16:01.194575   65660 main.go:143] libmachine: Ensuring network default is active
	I1213 14:16:01.195131   65660 main.go:143] libmachine: Ensuring network mk-newest-cni-994510 is active
	I1213 14:16:01.195757   65660 main.go:143] libmachine: getting domain XML...
	I1213 14:16:01.197197   65660 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-994510</name>
	  <uuid>30fbdf00-43d2-4fb6-8630-f0db2bc365e5</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22122-16298/.minikube/machines/newest-cni-994510/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22122-16298/.minikube/machines/newest-cni-994510/newest-cni-994510.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:44:4a:b4'/>
	      <source network='mk-newest-cni-994510'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:5a:df:a1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
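The XML above is the persistent libvirt definition the kvm2 driver boots. A minimal sketch of the lookup/dump/start sequence using the Go libvirt bindings (import path libvirt.org/go/libvirt, which needs cgo and the libvirt client library); it assumes the domain is already defined, as in this log:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Same URI the driver logs: qemu:///system.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connecting to libvirt: %v", err)
        }
        defer conn.Close()

        dom, err := conn.LookupDomainByName("newest-cni-994510")
        if err != nil {
            log.Fatalf("looking up domain: %v", err)
        }
        defer dom.Free()

        // Dump the persistent definition (the "getting domain XML..." step)...
        xml, err := dom.GetXMLDesc(0)
        if err != nil {
            log.Fatalf("dumping XML: %v", err)
        }
        log.Printf("domain XML is %d bytes", len(xml))

        // ...then boot it (the "starting domain..." step).
        if err := dom.Create(); err != nil {
            log.Fatalf("starting domain: %v", err)
        }
        log.Println("domain is now running")
    }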
	
	I1213 14:16:02.595304   65660 main.go:143] libmachine: waiting for domain to start...
	I1213 14:16:02.596894   65660 main.go:143] libmachine: domain is now running
	I1213 14:16:02.596945   65660 main.go:143] libmachine: waiting for IP...
	I1213 14:16:02.597844   65660 main.go:143] libmachine: domain newest-cni-994510 has defined MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.598831   65660 main.go:143] libmachine: domain newest-cni-994510 has current primary IP address 192.168.72.114 and MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.598852   65660 main.go:143] libmachine: found domain IP: 192.168.72.114
	I1213 14:16:02.598859   65660 main.go:143] libmachine: reserving static IP address...
	I1213 14:16:02.599517   65660 main.go:143] libmachine: found host DHCP lease matching {name: "newest-cni-994510", mac: "52:54:00:44:4a:b4", ip: "192.168.72.114"} in network mk-newest-cni-994510: {Iface:virbr4 ExpiryTime:2025-12-13 15:15:09 +0000 UTC Type:0 Mac:52:54:00:44:4a:b4 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:newest-cni-994510 Clientid:01:52:54:00:44:4a:b4}
	I1213 14:16:02.599551   65660 main.go:143] libmachine: skip adding static IP to network mk-newest-cni-994510 - found existing host DHCP lease matching {name: "newest-cni-994510", mac: "52:54:00:44:4a:b4", ip: "192.168.72.114"}
	I1213 14:16:02.599560   65660 main.go:143] libmachine: reserved static IP address 192.168.72.114 for domain newest-cni-994510
	I1213 14:16:02.599566   65660 main.go:143] libmachine: waiting for SSH...
	I1213 14:16:02.599571   65660 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 14:16:02.602167   65660 main.go:143] libmachine: domain newest-cni-994510 has defined MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.602671   65660 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:4a:b4", ip: ""} in network mk-newest-cni-994510: {Iface:virbr4 ExpiryTime:2025-12-13 15:15:09 +0000 UTC Type:0 Mac:52:54:00:44:4a:b4 Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:newest-cni-994510 Clientid:01:52:54:00:44:4a:b4}
	I1213 14:16:02.602700   65660 main.go:143] libmachine: domain newest-cni-994510 has defined IP address 192.168.72.114 and MAC address 52:54:00:44:4a:b4 in network mk-newest-cni-994510
	I1213 14:16:02.602916   65660 main.go:143] libmachine: Using SSH client type: native
	I1213 14:16:02.603157   65660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I1213 14:16:02.603168   65660 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 14:16:05.663680   65660 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.114:22: connect: no route to host
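Between "waiting for SSH" and the "no route to host" error, the driver is simply probing TCP port 22 until the guest's network stack comes up. A stripped-down Go sketch of that wait loop; waitForSSH is an illustrative helper, not the driver's actual function:

    package main

    import (
        "log"
        "net"
        "time"
    )

    // waitForSSH probes addr until a TCP connection succeeds or the deadline
    // passes. Early attempts typically fail with errors like
    // "connect: no route to host" while the VM is still booting.
    func waitForSSH(addr string, deadline time.Duration) error {
        var err error
        for start := time.Now(); time.Since(start) < deadline; {
            var conn net.Conn
            conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            log.Printf("Error dialing TCP: %v", err)
            time.Sleep(time.Second)
        }
        return err
    }

    func main() {
        if err := waitForSSH("192.168.72.114:22", 2*time.Minute); err != nil {
            log.Fatalf("SSH never came up: %v", err)
        }
        log.Println("SSH port is reachable")
    }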
	I1213 14:16:04.456629   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 14:16:04.456680   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.791507   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": read tcp 192.168.61.1:54372->192.168.61.21:8444: read: connection reset by peer
	I1213 14:16:05.791551   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.792084   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:05.944530   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:05.945425   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:06.444164   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:06.445072   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:06.944960   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:06.945938   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:07.444767   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:07.446109   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:07.944789   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:07.945707   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:08.444385   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:08.445288   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:08.945143   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:08.945902   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:09.444766   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:09.445649   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:09.944346   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:09.945163   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:10.445127   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:10.445955   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
	I1213 14:16:10.944653   64658 api_server.go:253] Checking apiserver healthz at https://192.168.61.21:8444/healthz ...
	I1213 14:16:10.945347   64658 api_server.go:269] stopped: https://192.168.61.21:8444/healthz: Get "https://192.168.61.21:8444/healthz": dial tcp 192.168.61.21:8444: connect: connection refused
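The interleaved lines from PID 64658 are a second concurrent minikube process polling a different apiserver (192.168.61.21:8444) at roughly 500ms intervals; "connection refused" only means that apiserver has not bound its port yet after a restart. A minimal Go sketch of such a healthz poll loop (the InsecureSkipVerify shortcut stands in for verification against the real cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Accept the apiserver's self-signed cert for the sketch; real code
        // verifies against the cluster CA instead.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.61.21:8444/healthz"
        for i := 0; i < 120; i++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err != nil {
                // e.g. "connect: connection refused" until the apiserver binds.
                fmt.Printf("stopped: %v\n", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver is healthy")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }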
	
	
	==> Docker <==
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.200344652Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.200485128Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:15:20 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:20Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.237676952Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.238207133Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.246967518Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.247009573Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:15:20 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:20.978311358Z" level=info msg="ignoring event" container=c522abf03bd68d5546f765f4b5f89231a556fd352bdc3bf6c742a5b152ef313f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 14:15:21 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1f03d7fa4950bf1999afa71cea62fd1bcf1d2684c789709041868d8f710fc0e/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.339669699Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.408770252Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:15:32 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:32.408895320Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:15:32 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:15:32Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Dec 13 14:15:33 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:33.760728266Z" level=error msg="Handler for POST /v1.51/containers/7731d9ba696b/pause returned error: cannot pause container 7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b: OCI runtime pause failed: container not running"
	Dec 13 14:15:33 no-preload-480987 dockerd[1186]: time="2025-12-13T14:15:33.837874481Z" level=info msg="ignoring event" container=7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 14:16:08 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 13 14:16:08 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:08Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7d764666f9-9278n_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"df1ae620e7830da08877464b409a1a379127a6f2a24e16d49affeaf5da36304b\""
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.908764997Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.908814325Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.920308371Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Dec 13 14:16:08 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:08.920350681Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.044310286Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.143181834Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 dockerd[1186]: time="2025-12-13T14:16:09.143362360Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Dec 13 14:16:09 no-preload-480987 cri-dockerd[1566]: time="2025-12-13T14:16:09Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cc149c15604ed       6e38f40d628db                                                                                         3 seconds ago        Running             storage-provisioner       2                   7fe73cfac55b5       storage-provisioner                         kube-system
	c87ce8eecf3dc       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        52 seconds ago       Running             kubernetes-dashboard      0                   e4c80e4356825       kubernetes-dashboard-b84665fb8-qgkp8        kubernetes-dashboard
	12db3d62fa358       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   c4d19dba95faf       busybox                                     default
	df6bc06c07314       aa5e3ebc0dfed                                                                                         About a minute ago   Running             coredns                   1                   42e2df8bc0c2a       coredns-7d764666f9-vqfqb                    kube-system
	d56ac35f2023e       8a4ded35a3eb1                                                                                         About a minute ago   Running             kube-proxy                1                   4df6888cada75       kube-proxy-bcqzf                            kube-system
	7731d9ba696bc       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   7fe73cfac55b5       storage-provisioner                         kube-system
	bb9406d173c82       7bb6219ddab95                                                                                         About a minute ago   Running             kube-scheduler            1                   598ae50e4090f       kube-scheduler-no-preload-480987            kube-system
	abc673268b8c4       a3e246e9556e9                                                                                         About a minute ago   Running             etcd                      1                   f25680d6231bd       etcd-no-preload-480987                      kube-system
	f15386049dc5d       45f3cc72d235f                                                                                         About a minute ago   Running             kube-controller-manager   1                   7c3c0ac1e767d       kube-controller-manager-no-preload-480987   kube-system
	c04badbd06c59       aa9d02839d8de                                                                                         About a minute ago   Running             kube-apiserver            1                   894e50d9bbd2f       kube-apiserver-no-preload-480987            kube-system
	a753bda60e00b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   3efacce8eff61       busybox                                     default
	a83817d1e3a19       aa5e3ebc0dfed                                                                                         2 minutes ago        Exited              coredns                   0                   bbeedeba027f5       coredns-7d764666f9-vqfqb                    kube-system
	825b5a74aef54       8a4ded35a3eb1                                                                                         2 minutes ago        Exited              kube-proxy                0                   58393cab0a018       kube-proxy-bcqzf                            kube-system
	dbcd28d379e9d       7bb6219ddab95                                                                                         2 minutes ago        Exited              kube-scheduler            0                   3aeb2c8b83364       kube-scheduler-no-preload-480987            kube-system
	421c3cd800264       a3e246e9556e9                                                                                         2 minutes ago        Exited              etcd                      0                   f584e9b37f307       etcd-no-preload-480987                      kube-system
	0a4ff8bbd246b       45f3cc72d235f                                                                                         2 minutes ago        Exited              kube-controller-manager   0                   3a909272bcfee       kube-controller-manager-no-preload-480987   kube-system
	15efb3b314731       aa9d02839d8de                                                                                         2 minutes ago        Exited              kube-apiserver            0                   6cd8631e870c0       kube-apiserver-no-preload-480987            kube-system
	
	
	==> coredns [a83817d1e3a1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48197 - 36083 "HINFO IN 948520708112921410.8802066444027197549. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08414206s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df6bc06c0731] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41339 - 43934 "HINFO IN 5178304912045032897.7220680391157509907. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10370268s
	
	
	==> describe nodes <==
	Name:               no-preload-480987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=no-preload-480987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T14_13_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 14:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480987
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 14:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:13:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 14:16:08 +0000   Sat, 13 Dec 2025 14:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.249
	  Hostname:    no-preload-480987
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 a518b2b6861e4d398d1398567a956c88
	  System UUID:                a518b2b6-861e-4d39-8d13-98567a956c88
	  Boot ID:                    f2072675-ae25-45ab-b1ff-1c552f111941
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 coredns-7d764666f9-vqfqb                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m47s
	  kube-system                 etcd-no-preload-480987                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m52s
	  kube-system                 kube-apiserver-no-preload-480987              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-controller-manager-no-preload-480987     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-bcqzf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-no-preload-480987              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 metrics-server-5d785b57d4-5xl42               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nkc9p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qgkp8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m48s  node-controller  Node no-preload-480987 event: Registered Node no-preload-480987 in Controller
	  Normal  RegisteredNode  67s    node-controller  Node no-preload-480987 event: Registered Node no-preload-480987 in Controller
	
	
	==> dmesg <==
	[Dec13 14:14] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001357] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.010383] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.784672] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000030] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.155485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.144137] kauditd_printk_skb: 393 callbacks suppressed
	[  +1.726112] kauditd_printk_skb: 161 callbacks suppressed
	[Dec13 14:15] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000056] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.837184] kauditd_printk_skb: 223 callbacks suppressed
	[  +0.228037] kauditd_printk_skb: 72 callbacks suppressed
	[Dec13 14:16] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [421c3cd80026] <==
	{"level":"warn","ts":"2025-12-13T14:13:15.360075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.366738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.382219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.388352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:13:15.481328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45230","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:13:30.479876Z","caller":"traceutil/trace.go:172","msg":"trace[1833513221] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"124.821045ms","start":"2025-12-13T14:13:30.354990Z","end":"2025-12-13T14:13:30.479811Z","steps":["trace[1833513221] 'process raft request'  (duration: 124.585013ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:13:30.758630Z","caller":"traceutil/trace.go:172","msg":"trace[2140602732] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"136.602392ms","start":"2025-12-13T14:13:30.622014Z","end":"2025-12-13T14:13:30.758616Z","steps":["trace[2140602732] 'process raft request'  (duration: 136.409305ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:14:15.200825Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T14:14:15.202393Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-480987","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.249:2380"],"advertise-client-urls":["https://192.168.83.249:2379"]}
	{"level":"error","ts":"2025-12-13T14:14:15.202578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:14:22.207006Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T14:14:22.210578Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.210910Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f03e5af8f7ea6d24","current-leader-member-id":"f03e5af8f7ea6d24"}
	{"level":"info","ts":"2025-12-13T14:14:22.211541Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T14:14:22.211817Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T14:14:22.214632Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:14:22.214878Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:14:22.214910Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T14:14:22.215259Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T14:14:22.215416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.249:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T14:14:22.215558Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.249:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.218997Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.249:2380"}
	{"level":"error","ts":"2025-12-13T14:14:22.219273Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.249:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T14:14:22.219421Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.249:2380"}
	{"level":"info","ts":"2025-12-13T14:14:22.219571Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-480987","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.249:2380"],"advertise-client-urls":["https://192.168.83.249:2379"]}
	
	
	==> etcd [abc673268b8c] <==
	{"level":"warn","ts":"2025-12-13T14:15:00.561226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.567555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.582549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.597405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.610812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.623256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.636624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.646981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.659299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.682561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.687891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.715178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.740572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.754560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.765201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T14:15:00.833533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T14:15:16.036041Z","caller":"traceutil/trace.go:172","msg":"trace[398394203] linearizableReadLoop","detail":"{readStateIndex:775; appliedIndex:775; }","duration":"193.985562ms","start":"2025-12-13T14:15:15.842027Z","end":"2025-12-13T14:15:16.036013Z","steps":["trace[398394203] 'read index received'  (duration: 193.980301ms)","trace[398394203] 'applied index is now lower than readState.Index'  (duration: 4.69µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T14:15:16.036309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.210969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T14:15:16.036359Z","caller":"traceutil/trace.go:172","msg":"trace[742166215] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:727; }","duration":"194.326868ms","start":"2025-12-13T14:15:15.842021Z","end":"2025-12-13T14:15:16.036348Z","steps":["trace[742166215] 'agreement among raft nodes before linearized reading'  (duration: 194.179953ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:21.850284Z","caller":"traceutil/trace.go:172","msg":"trace[1856919751] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"119.966499ms","start":"2025-12-13T14:15:21.730293Z","end":"2025-12-13T14:15:21.850259Z","steps":["trace[1856919751] 'process raft request'  (duration: 119.771866ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:21.870555Z","caller":"traceutil/trace.go:172","msg":"trace[1096756970] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"139.221775ms","start":"2025-12-13T14:15:21.731316Z","end":"2025-12-13T14:15:21.870538Z","steps":["trace[1096756970] 'process raft request'  (duration: 139.104459ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:22.391408Z","caller":"traceutil/trace.go:172","msg":"trace[135788815] linearizableReadLoop","detail":"{readStateIndex:807; appliedIndex:807; }","duration":"118.341068ms","start":"2025-12-13T14:15:22.273045Z","end":"2025-12-13T14:15:22.391386Z","steps":["trace[135788815] 'read index received'  (duration: 118.33172ms)","trace[135788815] 'applied index is now lower than readState.Index'  (duration: 8.341µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T14:15:22.391569Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.496908ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T14:15:22.391603Z","caller":"traceutil/trace.go:172","msg":"trace[588260666] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:757; }","duration":"118.552652ms","start":"2025-12-13T14:15:22.273037Z","end":"2025-12-13T14:15:22.391589Z","steps":["trace[588260666] 'agreement among raft nodes before linearized reading'  (duration: 118.470061ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T14:15:22.391585Z","caller":"traceutil/trace.go:172","msg":"trace[600189230] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"155.435049ms","start":"2025-12-13T14:15:22.236137Z","end":"2025-12-13T14:15:22.391572Z","steps":["trace[600189230] 'process raft request'  (duration: 155.304345ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:16:12 up 1 min,  0 users,  load average: 2.14, 0.81, 0.30
	Linux no-preload-480987 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [15efb3b31473] <==
	W1213 14:14:24.445471       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.581281       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.594077       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.594172       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.618785       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.664786       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.697063       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.708234       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.756333       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.762851       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.774633       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.796479       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.805360       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.817094       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.822343       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.872137       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.890323       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.926833       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.966475       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.985546       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.988201       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:24.999081       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.021870       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.025487       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 14:14:25.154191       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c04badbd06c5] <==
	E1213 14:15:02.854728       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 14:15:02.855393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 14:15:03.482345       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.249]
	I1213 14:15:03.487471       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 14:15:04.195966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 14:15:04.274505       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 14:15:04.337748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 14:15:04.356516       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 14:15:05.251412       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 14:15:05.435965       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 14:15:07.692640       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 14:15:08.351748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.118.59"}
	I1213 14:15:08.403429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.234.215"}
	W1213 14:16:06.893461       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 14:16:06.893817       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 14:16:06.893851       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 14:16:06.962470       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 14:16:06.969263       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 14:16:06.969326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0a4ff8bbd246] <==
	I1213 14:13:24.171667       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173371       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173673       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.227517       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:24.171973       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173119       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173204       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173288       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173453       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.173836       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174040       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.243849       1 range_allocator.go:177] "Sending events to api server"
	I1213 14:13:24.243896       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 14:13:24.243904       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:24.243916       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174139       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174232       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174313       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.174392       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.288212       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-480987" podCIDRs=["10.244.0.0/24"]
	I1213 14:13:24.328300       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.372441       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:24.372523       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 14:13:24.372530       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 14:13:29.188585       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [f15386049dc5] <==
	I1213 14:15:05.123173       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.127814       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145713       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145767       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.145867       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.148926       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.151356       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.153402       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 14:15:05.131028       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.166930       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 14:15:05.168763       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-480987"
	I1213 14:15:05.168796       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.178980       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 14:15:05.275565       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:05.376144       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:05.376162       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 14:15:05.376168       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 14:15:05.377426       1 shared_informer.go:377] "Caches are synced"
	E1213 14:15:07.975850       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.023416       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.076776       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" failed with pods \"dashboard-metrics-scraper-867fb5f87b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 14:15:08.087381       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1213 14:15:10.180464       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E1213 14:16:06.970751       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 14:16:06.998273       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [825b5a74aef5] <==
	I1213 14:13:27.434952       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:13:27.537239       1 shared_informer.go:377] "Caches are synced"
	I1213 14:13:27.537315       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.249"]
	E1213 14:13:27.541477       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:13:27.890996       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:13:27.891076       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:13:27.891101       1 server_linux.go:136] "Using iptables Proxier"
	I1213 14:13:28.046345       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:13:28.047596       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 14:13:28.047613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:13:28.069007       1 config.go:200] "Starting service config controller"
	I1213 14:13:28.069351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:13:28.069786       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:13:28.069797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:13:28.084631       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:13:28.084652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:13:28.092180       1 config.go:309] "Starting node config controller"
	I1213 14:13:28.092221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:13:28.092229       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:13:28.172328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:13:28.172494       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 14:13:28.185119       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d56ac35f2023] <==
	I1213 14:15:04.179196       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:04.280524       1 shared_informer.go:377] "Caches are synced"
	I1213 14:15:04.282161       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.249"]
	E1213 14:15:04.282304       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 14:15:04.416551       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 14:15:04.416804       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 14:15:04.416966       1 server_linux.go:136] "Using iptables Proxier"
	I1213 14:15:04.483468       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 14:15:04.486426       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 14:15:04.486470       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:15:04.514733       1 config.go:200] "Starting service config controller"
	I1213 14:15:04.514829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 14:15:04.514848       1 config.go:106] "Starting endpoint slice config controller"
	I1213 14:15:04.514852       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 14:15:04.514869       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 14:15:04.514873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 14:15:04.531338       1 config.go:309] "Starting node config controller"
	I1213 14:15:04.547621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 14:15:04.549356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 14:15:04.619402       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 14:15:04.632403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 14:15:04.632548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bb9406d173c8] <==
	I1213 14:14:59.768053       1 serving.go:386] Generated self-signed cert in-memory
	W1213 14:15:01.618693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 14:15:01.618832       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 14:15:01.618857       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 14:15:01.619158       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 14:15:01.741589       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 14:15:01.741634       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 14:15:01.749900       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 14:15:01.755671       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 14:15:01.758170       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 14:15:01.758530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 14:15:01.859577       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [dbcd28d379e9] <==
	E1213 14:13:17.759489       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1213 14:13:17.761454       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 14:13:17.807793       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:17.810508       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1213 14:13:17.828149       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1213 14:13:17.830273       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 14:13:17.838735       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:17.842088       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 14:13:17.864932       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1213 14:13:17.868183       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1213 14:13:17.872924       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 14:13:17.874635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 14:13:17.963042       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 14:13:17.965851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 14:13:17.991884       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1213 14:13:17.995477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 14:13:18.019764       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 14:13:18.022894       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 14:13:18.028239       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 14:13:18.030500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	I1213 14:13:19.979206       1 shared_informer.go:377] "Caches are synced"
	I1213 14:14:15.158169       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 14:14:15.161149       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 14:14:15.161158       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 14:14:15.161189       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619768    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04edfc90843076c87e63de2a69653f0a-usr-share-ca-certificates\") pod \"kube-apiserver-no-preload-480987\" (UID: \"04edfc90843076c87e63de2a69653f0a\") " pod="kube-system/kube-apiserver-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619803    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/6eb935396208fda222fa24605b775590-etcd-certs\") pod \"etcd-no-preload-480987\" (UID: \"6eb935396208fda222fa24605b775590\") " pod="kube-system/etcd-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619860    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04edfc90843076c87e63de2a69653f0a-ca-certs\") pod \"kube-apiserver-no-preload-480987\" (UID: \"04edfc90843076c87e63de2a69653f0a\") " pod="kube-system/kube-apiserver-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.619874    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/6eb935396208fda222fa24605b775590-etcd-data\") pod \"etcd-no-preload-480987\" (UID: \"6eb935396208fda222fa24605b775590\") " pod="kube-system/etcd-no-preload-480987"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.660740    4407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720736    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc17afa-ac4f-4744-b380-dff0bf9d5f12-lib-modules\") pod \"kube-proxy-bcqzf\" (UID: \"9fc17afa-ac4f-4744-b380-dff0bf9d5f12\") " pod="kube-system/kube-proxy-bcqzf"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720913    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc17afa-ac4f-4744-b380-dff0bf9d5f12-xtables-lock\") pod \"kube-proxy-bcqzf\" (UID: \"9fc17afa-ac4f-4744-b380-dff0bf9d5f12\") " pod="kube-system/kube-proxy-bcqzf"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.720958    4407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45d578da-7a44-4ee1-8dd2-77c3d4816633-tmp\") pod \"storage-provisioner\" (UID: \"45d578da-7a44-4ee1-8dd2-77c3d4816633\") " pod="kube-system/storage-provisioner"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: I1213 14:16:08.871656    4407 scope.go:122] "RemoveContainer" containerID="7731d9ba696bc48dd0037f538a0957012f30009a9e05e971c946977be10ff36b"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.922718    4407 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.922803    4407 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.923293    4407 kuberuntime_manager.go:1664] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-5d785b57d4-5xl42_kube-system(8be1da2d-1636-4055-8e9d-5ff3844c3e45): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
	Dec 13 14:16:08 no-preload-480987 kubelet[4407]: E1213 14:16:08.923332    4407 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-5xl42" podUID="8be1da2d-1636-4055-8e9d-5ff3844c3e45"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.152638    4407 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.152706    4407 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.153039    4407 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-867fb5f87b-nkc9p_kubernetes-dashboard(9ed00e7f-fb97-46c1-bba9-05beb5234b7e): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.153476    4407 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkc9p" podUID="9ed00e7f-fb97-46c1-bba9-05beb5234b7e"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.420640    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-480987" containerName="etcd"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.425333    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-480987" containerName="kube-apiserver"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.425655    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-480987" containerName="kube-scheduler"
	Dec 13 14:16:09 no-preload-480987 kubelet[4407]: E1213 14:16:09.426720    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-480987" containerName="kube-controller-manager"
	Dec 13 14:16:10 no-preload-480987 kubelet[4407]: E1213 14:16:10.436011    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-480987" containerName="etcd"
	Dec 13 14:16:10 no-preload-480987 kubelet[4407]: E1213 14:16:10.436948    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-480987" containerName="kube-apiserver"
	Dec 13 14:16:10 no-preload-480987 kubelet[4407]: E1213 14:16:10.437292    4407 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-480987" containerName="kube-scheduler"
	Dec 13 14:16:11 no-preload-480987 kubelet[4407]: E1213 14:16:11.576203    4407 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vqfqb" containerName="coredns"
	
	
	==> kubernetes-dashboard [c87ce8eecf3d] <==
	2025/12/13 14:15:20 Starting overwatch
	2025/12/13 14:15:20 Using namespace: kubernetes-dashboard
	2025/12/13 14:15:20 Using in-cluster config to connect to apiserver
	2025/12/13 14:15:20 Using secret token for csrf signing
	2025/12/13 14:15:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 14:15:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 14:15:20 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 14:15:20 Generating JWE encryption key
	2025/12/13 14:15:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 14:15:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 14:15:21 Initializing JWE encryption key from synchronized object
	2025/12/13 14:15:21 Creating in-cluster Sidecar client
	2025/12/13 14:15:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 14:15:21 Serving insecurely on HTTP port: 9090
	2025/12/13 14:16:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7731d9ba696b] <==
	I1213 14:15:03.625799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 14:15:33.643923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cc149c15604e] <==
	I1213 14:16:09.293714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 14:16:09.325031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 14:16:09.325859       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 14:16:09.334355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480987 -n no-preload-480987
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-480987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p: exit status 1 (87.043387ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-5xl42" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-nkc9p" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-480987 describe pod metrics-server-5d785b57d4-5xl42 dashboard-metrics-scraper-867fb5f87b-nkc9p: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (40.36s)
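For reference, the field selector in the post-mortem above does its filtering server-side: --field-selector=status.phase!=Running returns only pods whose phase is anything other than Running, and the jsonpath template flattens their names onto one line. A minimal Go wrapper around that same kubectl invocation (a hypothetical helper, not the suite's helpers_test.go code) could look like:

	// Hypothetical sketch of the post-mortem's non-running-pods listing.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nonRunningPods lists pod names across all namespaces whose phase is not Running.
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("no-preload-480987")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-running pods:", pods)
	}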

                                                
                                    

Test pass (406/452)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.87
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.34.2/json-events 2.67
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.18
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.17
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.83
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.18
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.7
31 TestOffline 99.81
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 216.18
38 TestAddons/serial/Volcano 47.68
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 10.69
44 TestAddons/parallel/Registry 18.88
45 TestAddons/parallel/RegistryCreds 0.66
46 TestAddons/parallel/Ingress 20.81
47 TestAddons/parallel/InspektorGadget 12.05
48 TestAddons/parallel/MetricsServer 7.37
50 TestAddons/parallel/CSI 64.26
51 TestAddons/parallel/Headlamp 21.97
52 TestAddons/parallel/CloudSpanner 5.66
53 TestAddons/parallel/LocalPath 50.27
54 TestAddons/parallel/NvidiaDevicePlugin 6.5
55 TestAddons/parallel/Yakd 12.66
57 TestAddons/StoppedEnableDisable 14.04
58 TestCertOptions 72.62
59 TestCertExpiration 312.92
60 TestDockerFlags 92.13
61 TestForceSystemdFlag 74.6
62 TestForceSystemdEnv 52.63
67 TestErrorSpam/setup 45.24
68 TestErrorSpam/start 0.4
69 TestErrorSpam/status 0.77
70 TestErrorSpam/pause 1.48
71 TestErrorSpam/unpause 1.88
72 TestErrorSpam/stop 17.18
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 88.55
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 54.53
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.37
84 TestFunctional/serial/CacheCmd/cache/add_local 1.41
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.15
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 56.72
93 TestFunctional/serial/ComponentHealth 0.08
94 TestFunctional/serial/LogsCmd 1.23
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 4.63
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 14.81
100 TestFunctional/parallel/DryRun 0.26
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 1.11
106 TestFunctional/parallel/ServiceCmdConnect 8.63
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 34.91
110 TestFunctional/parallel/SSHCmd 0.42
111 TestFunctional/parallel/CpCmd 1.44
112 TestFunctional/parallel/MySQL 46.21
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 1.37
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
122 TestFunctional/parallel/License 0.58
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.31
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.5
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.69
140 TestFunctional/parallel/ImageCommands/Setup 1.56
141 TestFunctional/parallel/ServiceCmd/List 0.83
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
145 TestFunctional/parallel/ServiceCmd/Format 0.38
146 TestFunctional/parallel/ServiceCmd/URL 0.32
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
148 TestFunctional/parallel/MountCmd/any-port 11.02
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
152 TestFunctional/parallel/ProfileCmd/profile_list 0.43
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
157 TestFunctional/parallel/DockerEnv/bash 0.83
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
161 TestFunctional/parallel/MountCmd/specific-port 1.63
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 86.15
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 59.09
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.39
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.31
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.2
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.16
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.14
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 53.77
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.17
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.17
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.06
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 14.37
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.28
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.14
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.98
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.49
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 30.2
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.37
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.18
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 50.81
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.22
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.27
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.19
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.38
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.71
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.36
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.21
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.42
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.72
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.41
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.81
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.4
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.73
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.51
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.18
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.54
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.49
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.48
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.64
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.16
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 0.94
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.09
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.25
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.26
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.34
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.33
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.32
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
260 TestGvisorAddon 203.07
263 TestMultiControlPlane/serial/StartCluster 240.48
264 TestMultiControlPlane/serial/DeployApp 7.41
265 TestMultiControlPlane/serial/PingHostFromPods 1.67
266 TestMultiControlPlane/serial/AddWorkerNode 53.21
267 TestMultiControlPlane/serial/NodeLabels 0.08
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
269 TestMultiControlPlane/serial/CopyFile 12.06
270 TestMultiControlPlane/serial/StopSecondaryNode 14.26
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
272 TestMultiControlPlane/serial/RestartSecondaryNode 32.38
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 167.88
275 TestMultiControlPlane/serial/DeleteSecondaryNode 8.25
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
277 TestMultiControlPlane/serial/StopCluster 41.04
278 TestMultiControlPlane/serial/RestartCluster 121.57
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
280 TestMultiControlPlane/serial/AddSecondaryNode 86.75
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
284 TestImageBuild/serial/Setup 48.89
285 TestImageBuild/serial/NormalBuild 2.02
286 TestImageBuild/serial/BuildWithBuildArg 1.43
287 TestImageBuild/serial/BuildWithDockerIgnore 1.41
288 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.86
293 TestJSONOutput/start/Command 92.43
294 TestJSONOutput/start/Audit 0
296 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/pause/Command 0.71
300 TestJSONOutput/pause/Audit 0
302 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/unpause/Command 0.64
306 TestJSONOutput/unpause/Audit 0
308 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
311 TestJSONOutput/stop/Command 8.21
312 TestJSONOutput/stop/Audit 0
314 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
315 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
316 TestErrorJSONOutput 0.28
321 TestMainNoArgs 0.07
322 TestMinikubeProfile 99.9
325 TestMountStart/serial/StartWithMountFirst 25.74
326 TestMountStart/serial/VerifyMountFirst 0.34
327 TestMountStart/serial/StartWithMountSecond 24.63
328 TestMountStart/serial/VerifyMountSecond 0.34
329 TestMountStart/serial/DeleteFirst 0.72
330 TestMountStart/serial/VerifyMountPostDelete 0.35
331 TestMountStart/serial/Stop 1.44
332 TestMountStart/serial/RestartStopped 23.29
333 TestMountStart/serial/VerifyMountPostStop 0.34
336 TestMultiNode/serial/FreshStart2Nodes 122.81
337 TestMultiNode/serial/DeployApp2Nodes 6.56
338 TestMultiNode/serial/PingHostFrom2Pods 1.11
339 TestMultiNode/serial/AddNode 55.9
340 TestMultiNode/serial/MultiNodeLabels 0.08
341 TestMultiNode/serial/ProfileList 0.53
342 TestMultiNode/serial/CopyFile 6.91
343 TestMultiNode/serial/StopNode 2.68
344 TestMultiNode/serial/StartAfterStop 47.48
345 TestMultiNode/serial/RestartKeepsNodes 200.9
346 TestMultiNode/serial/DeleteNode 2.5
347 TestMultiNode/serial/StopMultiNode 27.83
348 TestMultiNode/serial/RestartMultiNode 131.94
349 TestMultiNode/serial/ValidateNameConflict 49.4
354 TestPreload 165.21
356 TestScheduledStopUnix 119.7
357 TestSkaffold 143.17
360 TestRunningBinaryUpgrade 474.55
362 TestKubernetesUpgrade 195.63
376 TestPause/serial/Start 129.44
385 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
386 TestNoKubernetes/serial/StartWithK8s 59.09
387 TestNoKubernetes/serial/StartWithStopK8s 15.63
388 TestPause/serial/SecondStartNoReconfiguration 57.83
389 TestNoKubernetes/serial/Start 24.95
390 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
391 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
392 TestNoKubernetes/serial/ProfileList 21.6
393 TestPause/serial/Pause 0.7
394 TestPause/serial/VerifyStatus 0.24
395 TestPause/serial/Unpause 0.65
396 TestPause/serial/PauseAgain 0.94
397 TestPause/serial/DeletePaused 0.88
398 TestPause/serial/VerifyDeletedResources 0.68
399 TestNoKubernetes/serial/Stop 1.78
400 TestNoKubernetes/serial/StartNoArgs 45.08
401 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
402 TestISOImage/Setup 26.35
404 TestISOImage/Binaries/crictl 0.21
405 TestISOImage/Binaries/curl 0.2
406 TestISOImage/Binaries/docker 0.2
407 TestISOImage/Binaries/git 0.2
408 TestISOImage/Binaries/iptables 0.21
409 TestISOImage/Binaries/podman 0.21
410 TestISOImage/Binaries/rsync 0.19
411 TestISOImage/Binaries/socat 0.21
412 TestISOImage/Binaries/wget 0.22
413 TestISOImage/Binaries/VBoxControl 0.21
414 TestISOImage/Binaries/VBoxService 0.2
415 TestStoppedBinaryUpgrade/Setup 0.77
416 TestStoppedBinaryUpgrade/Upgrade 144.65
417 TestNetworkPlugins/group/auto/Start 94.74
418 TestNetworkPlugins/group/kindnet/Start 91.63
419 TestStoppedBinaryUpgrade/MinikubeLogs 1.75
420 TestNetworkPlugins/group/calico/Start 114
421 TestNetworkPlugins/group/auto/KubeletFlags 0.26
422 TestNetworkPlugins/group/auto/NetCatPod 13.38
423 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
424 TestNetworkPlugins/group/auto/DNS 0.24
425 TestNetworkPlugins/group/auto/Localhost 0.21
426 TestNetworkPlugins/group/auto/HairPin 0.2
427 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
428 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
429 TestNetworkPlugins/group/custom-flannel/Start 70.39
430 TestNetworkPlugins/group/kindnet/DNS 0.22
431 TestNetworkPlugins/group/kindnet/Localhost 0.2
432 TestNetworkPlugins/group/kindnet/HairPin 0.22
433 TestNetworkPlugins/group/false/Start 113.52
434 TestNetworkPlugins/group/enable-default-cni/Start 124.96
435 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
436 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
437 TestNetworkPlugins/group/calico/ControllerPod 6.01
438 TestNetworkPlugins/group/custom-flannel/DNS 0.26
439 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
440 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
441 TestNetworkPlugins/group/calico/KubeletFlags 0.24
442 TestNetworkPlugins/group/calico/NetCatPod 13.33
443 TestNetworkPlugins/group/calico/DNS 0.23
444 TestNetworkPlugins/group/flannel/Start 72.71
445 TestNetworkPlugins/group/calico/Localhost 0.23
446 TestNetworkPlugins/group/calico/HairPin 0.19
447 TestNetworkPlugins/group/false/KubeletFlags 0.2
448 TestNetworkPlugins/group/false/NetCatPod 11.33
449 TestNetworkPlugins/group/bridge/Start 71.45
450 TestNetworkPlugins/group/false/DNS 0.25
451 TestNetworkPlugins/group/false/Localhost 0.21
452 TestNetworkPlugins/group/false/HairPin 0.21
453 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
454 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.4
455 TestNetworkPlugins/group/kubenet/Start 99.89
456 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
457 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
458 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
459 TestNetworkPlugins/group/flannel/ControllerPod 6.01
461 TestStartStop/group/old-k8s-version/serial/FirstStart 109.3
462 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
463 TestNetworkPlugins/group/flannel/NetCatPod 14.35
464 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
465 TestNetworkPlugins/group/bridge/NetCatPod 13.37
466 TestNetworkPlugins/group/flannel/DNS 0.27
467 TestNetworkPlugins/group/flannel/Localhost 0.26
468 TestNetworkPlugins/group/flannel/HairPin 0.2
469 TestNetworkPlugins/group/bridge/DNS 0.31
470 TestNetworkPlugins/group/bridge/Localhost 0.23
471 TestNetworkPlugins/group/bridge/HairPin 0.22
473 TestStartStop/group/embed-certs/serial/FirstStart 94.56
475 TestStartStop/group/no-preload/serial/FirstStart 117.89
476 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
477 TestNetworkPlugins/group/kubenet/NetCatPod 12.34
478 TestNetworkPlugins/group/kubenet/DNS 0.23
479 TestNetworkPlugins/group/kubenet/Localhost 0.2
480 TestNetworkPlugins/group/kubenet/HairPin 0.2
482 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.16
483 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
484 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.44
485 TestStartStop/group/old-k8s-version/serial/Stop 14.67
486 TestStartStop/group/embed-certs/serial/DeployApp 11.46
487 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
488 TestStartStop/group/old-k8s-version/serial/SecondStart 54.18
489 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.38
490 TestStartStop/group/embed-certs/serial/Stop 14.54
491 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
492 TestStartStop/group/embed-certs/serial/SecondStart 51.32
493 TestStartStop/group/no-preload/serial/DeployApp 9.46
494 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
495 TestStartStop/group/no-preload/serial/Stop 13.65
496 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
497 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
498 TestStartStop/group/no-preload/serial/SecondStart 50.47
499 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.47
500 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
501 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.45
502 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.65
503 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
504 TestStartStop/group/old-k8s-version/serial/Pause 3.6
505 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
507 TestStartStop/group/newest-cni/serial/FirstStart 54.19
508 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
509 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 113.57
510 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
511 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
512 TestStartStop/group/embed-certs/serial/Pause 3.61
514 TestISOImage/PersistentMounts//data 0.21
515 TestISOImage/PersistentMounts//var/lib/docker 0.22
516 TestISOImage/PersistentMounts//var/lib/cni 0.21
517 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
518 TestISOImage/PersistentMounts//var/lib/minikube 0.21
519 TestISOImage/PersistentMounts//var/lib/toolbox 0.2
520 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
521 TestISOImage/VersionJSON 0.22
522 TestISOImage/eBPFSupport 0.22
523 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
524 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
525 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
527 TestStartStop/group/newest-cni/serial/DeployApp 0
528 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
529 TestStartStop/group/newest-cni/serial/Stop 13.86
530 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
531 TestStartStop/group/newest-cni/serial/SecondStart 38.43
532 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
533 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
534 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
535 TestStartStop/group/newest-cni/serial/Pause 3.79
536 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
537 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
538 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
539 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
TestDownloadOnly/v1.28.0/json-events (7.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-363454 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-363454 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (7.873893458s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.87s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 13:04:47.773779   20230 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1213 13:04:47.773871   20230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
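The preload-exists assertion above reduces to checking that the tarball cached by the earlier json-events run is still on disk. A rough standalone equivalent (illustrative only; the real check lives in minikube's preload package, and the v18 prefix is the preload schema version seen in this report):

	// Illustrative stat of the preload cache path from the log lines above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
			k8sVersion, runtime)
		p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
		_, err := os.Stat(p)
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "docker"))
	}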

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-363454
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-363454: exit status 85 (81.170123ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-363454 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-363454 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:39
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:39.959872   20242 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:39.959995   20242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.960000   20242 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:39.960004   20242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.960200   20242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	W1213 13:04:39.960340   20242 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-16298/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-16298/.minikube/config/config.json: no such file or directory
	I1213 13:04:39.960858   20242 out.go:368] Setting JSON to true
	I1213 13:04:39.961798   20242 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2831,"bootTime":1765628249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:39.961869   20242 start.go:143] virtualization: kvm guest
	I1213 13:04:39.967088   20242 out.go:99] [download-only-363454] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:39.967332   20242 notify.go:221] Checking for updates...
	W1213 13:04:39.967323   20242 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 13:04:39.969120   20242 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:39.971101   20242 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:39.972680   20242 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:04:39.974506   20242 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:04:39.976473   20242 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:39.980370   20242 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:39.980671   20242 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:40.569380   20242 out.go:99] Using the kvm2 driver based on user configuration
	I1213 13:04:40.569429   20242 start.go:309] selected driver: kvm2
	I1213 13:04:40.569436   20242 start.go:927] validating driver "kvm2" against <nil>
	I1213 13:04:40.569784   20242 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:40.570309   20242 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 13:04:40.570492   20242 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:40.570519   20242 cni.go:84] Creating CNI manager for ""
	I1213 13:04:40.570569   20242 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 13:04:40.570579   20242 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 13:04:40.570626   20242 start.go:353] cluster config:
	{Name:download-only-363454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-363454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:04:40.570808   20242 iso.go:125] acquiring lock: {Name:mkdb244ed0b6c01d7604ff94d6687c3511cb9170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:04:40.572870   20242 out.go:99] Downloading VM boot image ...
	I1213 13:04:40.572928   20242 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22122-16298/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
	I1213 13:04:44.414715   20242 out.go:99] Starting "download-only-363454" primary control-plane node in "download-only-363454" cluster
	I1213 13:04:44.414791   20242 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 13:04:44.431211   20242 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1213 13:04:44.431264   20242 cache.go:65] Caching tarball of preloaded images
	I1213 13:04:44.431519   20242 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 13:04:44.433891   20242 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 13:04:44.433928   20242 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1213 13:04:44.454097   20242 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1213 13:04:44.454260   20242 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-363454 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363454"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
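The Last Start log above shows minikube appending a checksum query parameter (md5, fetched from the GCS API) to the preload URL before downloading. A self-contained sketch of that download-and-verify step, assuming plain net/http rather than minikube's actual downloader:

	// Sketch: stream a download to disk while hashing it, then compare checksums.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dst, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		// Tee the body through the hash while writing it to disk.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and checksum taken from the download.go log line above.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4",
			"preloaded-images.tar.lz4",
			"8a955be835827bc584bcce0658a7fcc9")
		fmt.Println(err)
	}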

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-363454
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (2.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-257589 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-257589 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 : (2.674521706s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.67s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 13:04:50.877262   20230 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1213 13:04:50.877315   20230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-257589
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-257589: exit status 85 (82.093029ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-363454 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-363454 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-363454                                                                                                                         │ download-only-363454 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-257589 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2 │ download-only-257589 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:48
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:48.260714   20452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:48.261103   20452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:48.261140   20452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:48.261146   20452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:48.261443   20452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:04:48.262238   20452 out.go:368] Setting JSON to true
	I1213 13:04:48.263221   20452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2839,"bootTime":1765628249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:48.263304   20452 start.go:143] virtualization: kvm guest
	I1213 13:04:48.265286   20452 out.go:99] [download-only-257589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:48.265473   20452 notify.go:221] Checking for updates...
	I1213 13:04:48.266994   20452 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:48.268619   20452 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:48.270194   20452 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:04:48.271739   20452 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:04:48.273277   20452 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-257589 host does not exist
	  To start a cluster, run: "minikube start -p download-only-257589"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-257589
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (2.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-966117 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-966117 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 : (2.834158545s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.83s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 13:04:54.143527   20230 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1213 13:04:54.143584   20230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-966117
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-966117: exit status 85 (82.574715ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-363454 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2        │ download-only-363454 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-363454                                                                                                                                │ download-only-363454 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-257589 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=kvm2        │ download-only-257589 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-257589                                                                                                                                │ download-only-257589 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-966117 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=kvm2 │ download-only-966117 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:51
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:51.367664   20626 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:51.367762   20626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:51.367768   20626 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:51.367771   20626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:51.367970   20626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:04:51.368450   20626 out.go:368] Setting JSON to true
	I1213 13:04:51.369312   20626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2842,"bootTime":1765628249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:51.369372   20626 start.go:143] virtualization: kvm guest
	I1213 13:04:51.371639   20626 out.go:99] [download-only-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:51.371833   20626 notify.go:221] Checking for updates...
	I1213 13:04:51.373518   20626 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:51.375440   20626 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:51.377156   20626 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:04:51.378764   20626 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:04:51.380293   20626 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-966117 host does not exist
	  To start a cluster, run: "minikube start -p download-only-966117"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-966117
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.7s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 13:04:55.046180   20230 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-530202 --alsologtostderr --binary-mirror http://127.0.0.1:33747 --driver=kvm2 
helpers_test.go:176: Cleaning up "binary-mirror-530202" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-530202
--- PASS: TestBinaryMirror (0.70s)
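TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:33747 above) instead of dl.k8s.io. A mirror can be as small as a static file server; the directory layout in the comment is an assumption for illustration, not the test's actual fixture:

	// Minimal static mirror a --binary-mirror flag could point at.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Assumed layout mirroring the release URLs, e.g.
		// ./mirror/release/v1.34.2/bin/linux/amd64/kubectl
		log.Fatal(http.ListenAndServe("127.0.0.1:33747",
			http.FileServer(http.Dir("./mirror"))))
	}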

                                                
                                    
TestOffline (99.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-484904 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-484904 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m38.676526154s)
helpers_test.go:176: Cleaning up "offline-docker-484904" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-484904
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-484904: (1.135309148s)
--- PASS: TestOffline (99.81s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-597924
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-597924: exit status 85 (75.490216ms)

                                                
                                                
-- stdout --
	* Profile "addons-597924" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-597924"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-597924
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-597924: exit status 85 (71.332076ms)

                                                
                                                
-- stdout --
	* Profile "addons-597924" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-597924"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (216.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-597924 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-597924 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.181487984s)
--- PASS: TestAddons/Setup (216.18s)

TestAddons/serial/Volcano (47.68s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 27.389643ms
addons_test.go:878: volcano-admission stabilized in 27.472485ms
addons_test.go:870: volcano-scheduler stabilized in 27.523255ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-9q9ll" [222fea57-ab1a-4cbd-8731-faa76a651ec3] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.006078325s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-chmsv" [0ad07524-f70a-46f4-8f6b-c7632c3b64f4] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.005580928s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-klx5x" [f99993b1-8fd7-4197-9607-d49914c158d1] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005244362s
addons_test.go:905: (dbg) Run:  kubectl --context addons-597924 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-597924 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-597924 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [3cd43359-e8f8-4c71-9794-aec91c5497b7] Pending
helpers_test.go:353: "test-job-nginx-0" [3cd43359-e8f8-4c71-9794-aec91c5497b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [3cd43359-e8f8-4c71-9794-aec91c5497b7] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.004474852s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable volcano --alsologtostderr -v=1: (12.172500967s)
--- PASS: TestAddons/serial/Volcano (47.68s)
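The Volcano check waits for the scheduler, admission, and controller deployments to stabilize, submits a VolcanoJob from testdata/vcjob.yaml, and waits for the pod it schedules. A hand-run sketch of the same wait, assuming the addon is enabled and the manifest creates test-job in namespace my-volcano as it does here:

    kubectl --context addons-597924 create -f testdata/vcjob.yaml
    kubectl --context addons-597924 get vcjob -n my-volcano
    # Same label selector the test polls on.
    kubectl --context addons-597924 wait pod -n my-volcano \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s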

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-597924 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-597924 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (10.69s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-597924 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-597924 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7fee530e-cef6-45d3-a4db-fbc12951ce57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7fee530e-cef6-45d3-a4db-fbc12951ce57] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005807636s
addons_test.go:696: (dbg) Run:  kubectl --context addons-597924 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-597924 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-597924 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.69s)
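The gcp-auth addon mutates newly created pods to inject fake Google credentials; the test proves this by reading the environment inside the busybox pod it just created, which can be repeated verbatim:

    kubectl --context addons-597924 exec busybox -- \
      /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-597924 exec busybox -- \
      /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"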

TestAddons/parallel/Registry (18.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 20.892658ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1213 13:09:40.251829   20230 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 13:09:40.251856   20230 kapi.go:107] duration metric: took 23.483978ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:353: "registry-6b586f9694-9bzgq" [0889c5c4-b2c8-462b-89b9-26af8821ec36] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.013757932s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-xqgnc" [6dea91c6-99bb-43d8-8901-84c8d3474106] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004070782s
addons_test.go:394: (dbg) Run:  kubectl --context addons-597924 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-597924 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-597924 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.747362234s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 ip
2025/12/13 13:09:58 [DEBUG] GET http://192.168.39.44:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.88s)
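The registry addon is probed from two sides: inside the cluster via the service DNS name, and from the host via the node IP on port 5000 (the DEBUG GET line above). Roughly the same checks by hand; the /v2/_catalog path is the standard Docker registry API listing endpoint, not something this test itself calls:

    # In-cluster probe with a throwaway pod, as in the test.
    kubectl --context addons-597924 run --rm -it registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side probe against the node IP.
    curl -s "http://$(minikube -p addons-597924 ip):5000/v2/_catalog"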

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 9.660874ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-597924
addons_test.go:334: (dbg) Run:  kubectl --context addons-597924 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/Ingress (20.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-597924 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-597924 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-597924 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [5feeac7f-bca8-419c-a5a3-166cff0ae170] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [5feeac7f-bca8-419c-a5a3-166cff0ae170] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.006200827s
I1213 13:10:14.237165   20230 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-597924 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.44
addons_test.go:301: (dbg) Done: nslookup hello-john.test 192.168.39.44: (1.159211466s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable ingress-dns --alsologtostderr -v=1: (2.461116123s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable ingress --alsologtostderr -v=1: (7.906808122s)
--- PASS: TestAddons/parallel/Ingress (20.81s)
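Since nginx.example.com does not resolve publicly, ingress routing is verified from inside the VM with an explicit Host header, and ingress-dns by pointing nslookup at the node IP; both checks rerun cleanly by hand:

    minikube -p addons-597924 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p addons-597924 ip)"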

TestAddons/parallel/InspektorGadget (12.05s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-j46kj" [190d3767-e513-44d6-ae6e-8e161b42ea45] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.022574977s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable inspektor-gadget --alsologtostderr -v=1: (6.023058961s)
--- PASS: TestAddons/parallel/InspektorGadget (12.05s)

TestAddons/parallel/MetricsServer (7.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 33.640117ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-g6f89" [8a8c33cf-e489-4931-a013-3d1646de48a5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.1004826s
addons_test.go:465: (dbg) Run:  kubectl --context addons-597924 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable metrics-server --alsologtostderr -v=1: (1.127391908s)
--- PASS: TestAddons/parallel/MetricsServer (7.37s)

TestAddons/parallel/CSI (64.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1213 13:09:40.228390   20230 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 23.493416ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [750b3304-2f1d-485a-93c7-4e1a4e2688e3] Pending
helpers_test.go:353: "task-pv-pod" [750b3304-2f1d-485a-93c7-4e1a4e2688e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [750b3304-2f1d-485a-93c7-4e1a4e2688e3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.006078404s
addons_test.go:574: (dbg) Run:  kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-597924 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-597924 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-597924 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-597924 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [76396fb1-972c-41da-9080-c93f1a94e602] Pending
helpers_test.go:353: "task-pv-pod-restore" [76396fb1-972c-41da-9080-c93f1a94e602] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [76396fb1-972c-41da-9080-c93f1a94e602] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005690012s
addons_test.go:616: (dbg) Run:  kubectl --context addons-597924 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-597924 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-597924 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.221843874s)
--- PASS: TestAddons/parallel/CSI (64.26s)
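The repeated jsonpath polls above are the test helper waiting for each claim to leave Pending; the overall flow is create a PVC, bind it in a pod, snapshot it, then restore the snapshot into a second PVC and pod. Interactively, the polling loop collapses into a single kubectl wait (jsonpath conditions need kubectl v1.23 or newer):

    kubectl --context addons-597924 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-597924 wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m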

TestAddons/parallel/Headlamp (21.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-597924 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-597924 --alsologtostderr -v=1: (1.435127903s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-29ggm" [df44af88-74b4-4bfb-854f-32e2ec66ea0e] Pending
helpers_test.go:353: "headlamp-dfcdc64b-29ggm" [df44af88-74b4-4bfb-854f-32e2ec66ea0e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-29ggm" [df44af88-74b4-4bfb-854f-32e2ec66ea0e] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.026433048s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable headlamp --alsologtostderr -v=1: (6.508537852s)
--- PASS: TestAddons/parallel/Headlamp (21.97s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-ppz9d" [8dcf78bb-99c9-4c00-847a-4a3b05eb54c2] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010917641s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (50.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-597924 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-597924 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b46a4dbb-4eb9-473e-9e26-b47455f0b9b8] Pending
helpers_test.go:353: "test-local-path" [b46a4dbb-4eb9-473e-9e26-b47455f0b9b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b46a4dbb-4eb9-473e-9e26-b47455f0b9b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b46a4dbb-4eb9-473e-9e26-b47455f0b9b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00756038s
addons_test.go:969: (dbg) Run:  kubectl --context addons-597924 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 ssh "cat /opt/local-path-provisioner/pvc-41cf3626-b214-440d-9e99-f56a07bd7f4b_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-597924 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-597924 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (35.204645736s)
--- PASS: TestAddons/parallel/LocalPath (50.27s)
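The local-path test writes file1 through a busybox pod, then reads it back over SSH from the provisioner's backing directory on the node; the pvc-... path segment is generated per claim, so it changes every run. A quick way to see what the provisioner has created while the addon is still enabled:

    minikube -p addons-597924 ssh "ls /opt/local-path-provisioner"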

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-22rp4" [31181012-956f-496b-9c14-171533ab70ab] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006618264s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (12.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-b2zzn" [844aa529-965d-4f40-b48a-93f352949389] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006338294s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-597924 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-597924 addons disable yakd --alsologtostderr -v=1: (6.647484719s)
--- PASS: TestAddons/parallel/Yakd (12.66s)

TestAddons/StoppedEnableDisable (14.04s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-597924
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-597924: (13.817795868s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-597924
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-597924
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-597924
--- PASS: TestAddons/StoppedEnableDisable (14.04s)

TestCertOptions (72.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-539788 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-539788 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m11.104608158s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-539788 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-539788 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-539788 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-539788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-539788
--- PASS: TestCertOptions (72.62s)
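TestCertOptions asserts that every --apiserver-ips/--apiserver-names value (plus the custom port) lands in the generated API server certificate. The same inspection by hand, filtering the openssl dump down to the SAN block:

    minikube -p cert-options-539788 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'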

TestCertExpiration (312.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867565 --memory=3072 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867565 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m10.816094124s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867565 --memory=3072 --cert-expiration=8760h --driver=kvm2 
E1213 14:03:56.581174   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867565 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m1.128083629s)
helpers_test.go:176: Cleaning up "cert-expiration-867565" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-867565
--- PASS: TestCertExpiration (312.92s)

TestDockerFlags (92.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-092301 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-092301 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m30.661264722s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-092301 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-092301 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-092301" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-092301
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-092301: (1.007748148s)
--- PASS: TestDockerFlags (92.13s)
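The --docker-env and --docker-opt values are applied to the docker systemd unit inside the VM, which is why the test reads them back through unit properties rather than the Docker API:

    minikube -p docker-flags-092301 ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-092301 ssh "sudo systemctl show docker --property=ExecStart --no-pager"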

TestForceSystemdFlag (74.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-634706 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-634706 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m13.212727268s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-634706 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-634706" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-634706
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-634706: (1.005287323s)
--- PASS: TestForceSystemdFlag (74.60s)
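--force-systemd switches the container runtime to the systemd cgroup driver, so the assertion is a one-liner against docker info that should print "systemd" when the flag is honored:

    minikube -p force-systemd-flag-634706 ssh "docker info --format {{.CgroupDriver}}"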

TestForceSystemdEnv (52.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-081479 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-081479 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (51.374026421s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-081479 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-081479" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-081479
--- PASS: TestForceSystemdEnv (52.63s)

TestErrorSpam/setup (45.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-277927 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-277927 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-277927 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-277927 --driver=kvm2 : (45.236209841s)
--- PASS: TestErrorSpam/setup (45.24s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

TestErrorSpam/stop (17.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop: (13.894594843s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop: (1.683920773s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-277927 --log_dir /tmp/nospam-277927 stop: (1.599940601s)
--- PASS: TestErrorSpam/stop (17.18s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-16298/.minikube/files/etc/test/nested/copy/20230/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
E1213 13:13:32.504543   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.511131   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.522633   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.544145   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.585645   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.667217   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:32.828973   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:33.150727   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:33.792957   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:35.074618   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:37.637559   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-427989 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m28.552042975s)
--- PASS: TestFunctional/serial/StartWithProxy (88.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (54.53s)

=== RUN   TestFunctional/serial/SoftStart
I1213 13:13:41.358219   20230 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --alsologtostderr -v=8
E1213 13:13:42.759170   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:13:53.000847   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:14:13.483138   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-427989 --alsologtostderr -v=8: (54.53144382s)
functional_test.go:678: soft start took 54.532274829s for "functional-427989" cluster.
I1213 13:14:35.890002   20230 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (54.53s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-427989 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-427989 /tmp/TestFunctionalserialCacheCmdcacheadd_local1510949339/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache add minikube-local-cache-test:functional-427989
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache delete minikube-local-cache-test:functional-427989
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-427989
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (186.912749ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)
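cache_reload is the interesting cache subtest: the image is removed inside the VM, crictl confirms it is gone (the FATA line above is the expected failure), and cache reload pushes the host-side cached copy back in. The full round trip:

    minikube -p functional-427989 cache add registry.k8s.io/pause:latest
    minikube -p functional-427989 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-427989 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-427989 cache reload
    minikube -p functional-427989 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again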

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 kubectl -- --context functional-427989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-427989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (56.72s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 13:14:54.445140   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-427989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.718453605s)
functional_test.go:776: restart took 56.718623136s for "functional-427989" cluster.
I1213 13:15:38.409346   20230 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (56.72s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-427989 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-427989 logs: (1.226093587s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 logs --file /tmp/TestFunctionalserialLogsFileCmd3032176597/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-427989 logs --file /tmp/TestFunctionalserialLogsFileCmd3032176597/001/logs.txt: (1.202657044s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (4.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-427989 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-427989
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-427989: exit status 115 (275.922836ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.28:31033 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-427989 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-427989 delete -f testdata/invalidsvc.yaml: (1.059270746s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 config get cpus: exit status 14 (84.716707ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 config get cpus: exit status 14 (83.455605ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
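
The assertions above lean on an exit-code convention: config get for an unset key exits with status 14 and prints "Error: specified key could not be found in config" on stderr, while a set key exits 0 and prints the value. A small Go sketch that branches on that code; the value 14 is read off this run, not from a documented guarantee:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "config", "get", "cpus").CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set to %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// Exit status 14 is what the log above shows for an unset key.
		fmt.Println("cpus is not set")
	default:
		fmt.Println("unexpected failure:", err)
	}
}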

TestFunctional/parallel/DashboardCmd (14.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427989 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-427989 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 25819: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.81s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (124.614622ms)

-- stdout --
	* [functional-427989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 13:15:55.645855   25923 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:15:55.646005   25923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:55.646018   25923 out.go:374] Setting ErrFile to fd 2...
	I1213 13:15:55.646025   25923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:55.646223   25923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:15:55.646749   25923 out.go:368] Setting JSON to false
	I1213 13:15:55.647694   25923 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3507,"bootTime":1765628249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:15:55.647770   25923 start.go:143] virtualization: kvm guest
	I1213 13:15:55.649771   25923 out.go:179] * [functional-427989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:15:55.651132   25923 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:15:55.651145   25923 notify.go:221] Checking for updates...
	I1213 13:15:55.654480   25923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:15:55.656031   25923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:15:55.657541   25923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:15:55.659170   25923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:15:55.660430   25923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:15:55.662095   25923 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:15:55.662645   25923 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:15:55.699988   25923 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:15:55.701477   25923 start.go:309] selected driver: kvm2
	I1213 13:15:55.701522   25923 start.go:927] validating driver "kvm2" against &{Name:functional-427989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-427989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:15:55.701676   25923 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:15:55.704255   25923 out.go:203] 
	W1213 13:15:55.705567   25923 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:15:55.706685   25923 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-427989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-427989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (124.456283ms)

-- stdout --
	* [functional-427989] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 13:15:55.522687   25908 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:15:55.522820   25908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:55.522831   25908 out.go:374] Setting ErrFile to fd 2...
	I1213 13:15:55.522837   25908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:55.523227   25908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:15:55.523760   25908 out.go:368] Setting JSON to false
	I1213 13:15:55.524780   25908 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3507,"bootTime":1765628249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:15:55.524852   25908 start.go:143] virtualization: kvm guest
	I1213 13:15:55.526776   25908 out.go:179] * [functional-427989] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:15:55.528240   25908 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:15:55.528290   25908 notify.go:221] Checking for updates...
	I1213 13:15:55.530464   25908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:15:55.532139   25908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:15:55.533592   25908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:15:55.534912   25908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:15:55.536428   25908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:15:55.538512   25908 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:15:55.539371   25908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:15:55.574820   25908 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 13:15:55.576386   25908 start.go:309] selected driver: kvm2
	I1213 13:15:55.576435   25908 start.go:927] validating driver "kvm2" against &{Name:functional-427989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-427989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:15:55.576583   25908 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:15:55.579212   25908 out.go:203] 
	W1213 13:15:55.580545   25908 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:15:55.581791   25908 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
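
The French run differs from DryRun only in the process locale; no dedicated flag is involved. A sketch of forcing that from Go, under the assumption (not shown in this log) that minikube picks its translations from LC_ALL/LANG and that the fr_FR locale exists on the host:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Same dry-run invocation as above, but with a French locale in the
	// child environment; exit status 23 is still expected.
	cmd := exec.Command("minikube", "start", "-p", "demo", "--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}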

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
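
status accepts free-form Go templates over its status struct (Host, Kubelet, APIServer, Kubeconfig); the "kublet:" in the -f run above is just an arbitrary label inside the template, not a field name. For programmatic use, -o json carries the same fields. A sketch decoding it, assuming a single-node profile named "demo" so the output is one object:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Status is a trimmed view of minikube's status JSON, limited to the
// fields the template above touches.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero for stopped or paused clusters, so report
		// the error but still try to decode whatever JSON was printed.
		fmt.Println("non-zero status exit:", err)
	}
	var st Status
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		panic(jsonErr)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}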

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-427989 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-427989 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-cqkqh" [e884f35f-e4ed-4f69-a247-41238a0adad2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-cqkqh" [e884f35f-e4ed-4f69-a247-41238a0adad2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.011095977s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.28:30537
functional_test.go:1680: http://192.168.39.28:30537: success! body:
Request served by hello-node-connect-7d85dfc575-cqkqh

HTTP/1.1 GET /

Host: 192.168.39.28:30537
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
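
The connect test resolves the NodePort URL with service --url and then issues a plain HTTP GET; the echo-server replies with the request it received, which is the body printed above. A compact sketch of the same probe, with hypothetical profile and service names:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service's NodePort URL, e.g. http://192.168.39.28:30537.
	out, err := exec.Command("minikube", "-p", "demo", "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s answered:\n%s", url, body)
}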

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (34.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [75a5f2eb-884b-4329-a6ac-75c290475092] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00666288s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-427989 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-427989 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-427989 get pvc myclaim -o=json
I1213 13:15:51.805876   20230 retry.go:31] will retry after 2.068136987s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:265d9e1a-af75-477b-bac8-b23e6919f020 ResourceVersion:855 Generation:0 CreationTimestamp:2025-12-13 13:15:51 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c44d70 VolumeMode:0xc001c44d80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-427989 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427989 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:15:54.092289   20230 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f75a6731-ecca-452c-b729-4c11606c8abe] Pending
helpers_test.go:353: "sp-pod" [f75a6731-ecca-452c-b729-4c11606c8abe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f75a6731-ecca-452c-b729-4c11606c8abe] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.006635822s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-427989 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-427989 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-427989 delete -f testdata/storage-provisioner/pod.yaml: (1.707826182s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-427989 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [fe66e823-c3a4-464a-8624-404fa49578b0] Pending
helpers_test.go:353: "sp-pod" [fe66e823-c3a4-464a-8624-404fa49578b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [fe66e823-c3a4-464a-8624-404fa49578b0] Running
E1213 13:16:16.367269   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007765086s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-427989 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.91s)
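
The retry.go:31 line above is the harness polling the claim until its phase flips from Pending to Bound once the hostpath provisioner has created a volume. The same wait can be written as a small polling loop; this sketch uses kubectl's jsonpath output with the context and claim names from this run, and an arbitrary timeout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase reads .status.phase of a PersistentVolumeClaim via kubectl.
func pvcPhase(ctx, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		phase, err := pvcPhase("functional-427989", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("claim is Bound")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("gave up waiting; last phase %q, err %v", phase, err))
		}
		time.Sleep(2 * time.Second) // crude fixed delay; the harness uses jittered backoff
	}
}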

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh -n functional-427989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cp functional-427989:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd809760460/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh -n functional-427989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh -n functional-427989 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

TestFunctional/parallel/MySQL (46.21s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-427989 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-bhsrl" [918b56df-c325-4d57-82ba-0f56805a5689] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-bhsrl" [918b56df-c325-4d57-82ba-0f56805a5689] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.005524599s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;": exit status 1 (199.433555ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 13:16:37.902136   20230 retry.go:31] will retry after 997.559069ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;": exit status 1 (184.735862ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 13:16:39.085313   20230 retry.go:31] will retry after 750.882793ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;": exit status 1 (201.818454ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 13:16:40.038827   20230 retry.go:31] will retry after 2.526302835s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;": exit status 1 (233.846577ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 13:16:42.800062   20230 retry.go:31] will retry after 1.746184059s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;": exit status 1 (302.585787ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 13:16:44.850775   20230 retry.go:31] will retry after 5.68462602s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-427989 exec mysql-6bcdcbc558-bhsrl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (46.21s)
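
The repeated ERROR 2002 (socket not up yet) and ERROR 1045 (auth tables still initializing) exits are normal while mysqld boots; the harness simply retries with growing, jittered delays (997ms, 750ms, 2.5s, 1.7s, 5.6s above). A generic Go sketch of that retry shape, not the harness's actual retry.go; "deploy/mysql" is used so kubectl picks a pod itself:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		err := exec.Command("kubectl", "--context", "functional-427989", "exec",
			"deploy/mysql", "--", "mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			fmt.Println("mysql is answering")
			return
		}
		// Jittered exponential backoff: sleep between delay/2 and delay, then double.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	fmt.Println("mysql never became ready")
}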

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/20230/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /etc/test/nested/copy/20230/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/20230.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /etc/ssl/certs/20230.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/20230.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /usr/share/ca-certificates/20230.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/202302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /etc/ssl/certs/202302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/202302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /usr/share/ca-certificates/202302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-427989 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh "sudo systemctl is-active crio": exit status 1 (197.072303ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)
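
The non-zero exit here is the point of the test: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, so with docker as the runtime, crio reporting "inactive" plus exit status 3 is the passing outcome. A sketch of the same probe, with a hypothetical profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err) // the command could not be run at all
	}
	// "inactive" with a non-zero exit is expected when another runtime is
	// active; only state "active" would indicate a misconfigured node.
	fmt.Printf("crio state: %q (exit error: %v)\n", state, err)
}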

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-427989 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-427989 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-2wg52" [3eca96ec-865d-463f-9350-3d10f2462d53] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-2wg52" [3eca96ec-865d-463f-9350-3d10f2462d53] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00637807s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427989 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-427989
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-427989
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427989 image ls --format short --alsologtostderr:
I1213 13:16:06.036861   26609 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:06.037120   26609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.037129   26609 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:06.037133   26609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.037355   26609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:16:06.038011   26609 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.038110   26609 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.040823   26609 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:06.044208   26609 main.go:143] libmachine: domain functional-427989 has defined MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.044766   26609 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d8:ae:06", ip: ""} in network mk-functional-427989: {Iface:virbr1 ExpiryTime:2025-12-13 14:12:29 +0000 UTC Type:0 Mac:52:54:00:d8:ae:06 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:functional-427989 Clientid:01:52:54:00:d8:ae:06}
I1213 13:16:06.044799   26609 main.go:143] libmachine: domain functional-427989 has defined IP address 192.168.39.28 and MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.045018   26609 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-427989/id_rsa Username:docker}
I1213 13:16:06.130035   26609 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427989 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ localhost/my-image                          │ functional-427989 │ 6a7de2eaf672d │ 1.24MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kicbase/echo-server               │ functional-427989 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ docker.io/library/minikube-local-cache-test │ functional-427989 │ 5a4415cad7aa8 │ 30B    │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427989 image ls --format table --alsologtostderr:
I1213 13:16:12.390079   26823 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:12.390421   26823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:12.390433   26823 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:12.390440   26823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:12.390643   26823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:16:12.391205   26823 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:12.391324   26823 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:12.394041   26823 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:12.402516   26823 main.go:143] libmachine: domain functional-427989 has defined MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:12.403270   26823 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d8:ae:06", ip: ""} in network mk-functional-427989: {Iface:virbr1 ExpiryTime:2025-12-13 14:12:29 +0000 UTC Type:0 Mac:52:54:00:d8:ae:06 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:functional-427989 Clientid:01:52:54:00:d8:ae:06}
I1213 13:16:12.403392   26823 main.go:143] libmachine: domain functional-427989 has defined IP address 192.168.39.28 and MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:12.404034   26823 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-427989/id_rsa Username:docker}
I1213 13:16:12.519871   26823 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)
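
The table and json formats carry the same image data; json is the one to consume programmatically (see the listing in the next test). A sketch decoding image ls --format json into a minimal struct, with field names taken from the keys visible below and a hypothetical profile name:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Image is a trimmed element of `minikube image ls --format json`;
// note that "size" is a string in the output, not a number.
type Image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []Image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %10s bytes\n", img.RepoTags, img.Size)
	}
}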

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427989 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"5a4415cad7aa8d3f2eb549a93d109461c8e071c008961d0141089a929ec6f59a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-427989"],"size":"30"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-427989","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6a7de2eaf672ddd7fc96ce443b82e139ba1aa8be324787534ccf8b25c7383a00","repoDigests":[],"repoTags":["localhost/my-image:functional-427989"],"size":"1240000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427989 image ls --format json --alsologtostderr:
I1213 13:16:12.153817   26812 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:12.154139   26812 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:12.154148   26812 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:12.154153   26812 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:12.154445   26812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:16:12.155192   26812 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:12.155332   26812 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:12.157943   26812 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:12.160867   26812 main.go:143] libmachine: domain functional-427989 has defined MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:12.161514   26812 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d8:ae:06", ip: ""} in network mk-functional-427989: {Iface:virbr1 ExpiryTime:2025-12-13 14:12:29 +0000 UTC Type:0 Mac:52:54:00:d8:ae:06 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:functional-427989 Clientid:01:52:54:00:d8:ae:06}
I1213 13:16:12.161547   26812 main.go:143] libmachine: domain functional-427989 has defined IP address 192.168.39.28 and MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:12.161745   26812 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-427989/id_rsa Username:docker}
I1213 13:16:12.264682   26812 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
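Note for readers post-processing this report: the JSON listing above is a single flat array of image records. The following Go sketch decodes that shape; it is not part of the test suite, and the images.json file name is an assumption for illustration. The struct fields mirror the id/repoDigests/repoTags/size keys visible in the stdout.

```go
// parseimages.go - minimal sketch for decoding the `minikube image ls
// --format json` output shown above. Illustrative only; the input file
// name "images.json" is a hypothetical capture of that stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the test's JSON stdout.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // a decimal string of bytes, as quoted above
}

func main() {
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	// Print one line per tag, e.g. "registry.k8s.io/pause:3.1	742000 bytes".
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}
```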

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-427989 image ls --format yaml --alsologtostderr:
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5a4415cad7aa8d3f2eb549a93d109461c8e071c008961d0141089a929ec6f59a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-427989
size: "30"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-427989
- docker.io/kicbase/echo-server:latest
size: "4940000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427989 image ls --format yaml --alsologtostderr:
I1213 13:16:06.231919   26620 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:06.232073   26620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.232081   26620 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:06.232088   26620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.232443   26620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:16:06.233340   26620 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.233583   26620 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.237258   26620 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:06.240514   26620 main.go:143] libmachine: domain functional-427989 has defined MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.241105   26620 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d8:ae:06", ip: ""} in network mk-functional-427989: {Iface:virbr1 ExpiryTime:2025-12-13 14:12:29 +0000 UTC Type:0 Mac:52:54:00:d8:ae:06 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:functional-427989 Clientid:01:52:54:00:d8:ae:06}
I1213 13:16:06.241142   26620 main.go:143] libmachine: domain functional-427989 has defined IP address 192.168.39.28 and MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.241429   26620 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-427989/id_rsa Username:docker}
I1213 13:16:06.338193   26620 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh pgrep buildkitd: exit status 1 (217.653028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image build -t localhost/my-image:functional-427989 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-427989 image build -t localhost/my-image:functional-427989 testdata/build --alsologtostderr: (5.155062594s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-427989 image build -t localhost/my-image:functional-427989 testdata/build --alsologtostderr:
I1213 13:16:06.671596   26642 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:06.671958   26642 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.671970   26642 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:06.671976   26642 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.672221   26642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:16:06.672849   26642 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.673709   26642 config.go:182] Loaded profile config "functional-427989": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 13:16:06.676348   26642 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:06.679388   26642 main.go:143] libmachine: domain functional-427989 has defined MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.679935   26642 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d8:ae:06", ip: ""} in network mk-functional-427989: {Iface:virbr1 ExpiryTime:2025-12-13 14:12:29 +0000 UTC Type:0 Mac:52:54:00:d8:ae:06 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:functional-427989 Clientid:01:52:54:00:d8:ae:06}
I1213 13:16:06.679975   26642 main.go:143] libmachine: domain functional-427989 has defined IP address 192.168.39.28 and MAC address 52:54:00:d8:ae:06 in network mk-functional-427989
I1213 13:16:06.680183   26642 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-427989/id_rsa Username:docker}
I1213 13:16:06.775131   26642 build_images.go:162] Building image from path: /tmp/build.2374878269.tar
I1213 13:16:06.775200   26642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:16:06.806844   26642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2374878269.tar
I1213 13:16:06.816635   26642 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2374878269.tar: stat -c "%s %y" /var/lib/minikube/build/build.2374878269.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2374878269.tar': No such file or directory
I1213 13:16:06.816678   26642 ssh_runner.go:362] scp /tmp/build.2374878269.tar --> /var/lib/minikube/build/build.2374878269.tar (3072 bytes)
I1213 13:16:06.861051   26642 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2374878269
I1213 13:16:06.877464   26642 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2374878269 -xf /var/lib/minikube/build/build.2374878269.tar
I1213 13:16:06.897818   26642 docker.go:361] Building image: /var/lib/minikube/build/build.2374878269
I1213 13:16:06.897916   26642 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-427989 /var/lib/minikube/build/build.2374878269
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B 0.0s done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 DONE 0.1s

#5 [internal] load build context
#5 transferring context: 62B 0.1s done
#5 DONE 0.2s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:6a7de2eaf672ddd7fc96ce443b82e139ba1aa8be324787534ccf8b25c7383a00 done
#8 naming to localhost/my-image:functional-427989 0.0s done
#8 DONE 0.1s
I1213 13:16:11.654785   26642 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-427989 /var/lib/minikube/build/build.2374878269: (4.756838412s)
I1213 13:16:11.654861   26642 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2374878269
I1213 13:16:11.691981   26642 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2374878269.tar
I1213 13:16:11.749807   26642 build_images.go:218] Built localhost/my-image:functional-427989 from /tmp/build.2374878269.tar
I1213 13:16:11.749856   26642 build_images.go:134] succeeded building to: functional-427989
I1213 13:16:11.749879   26642 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.69s)
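The three BuildKit steps logged above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile along the following lines. This is a reconstruction inferred from the log, not the verbatim contents of testdata/build:

```dockerfile
# Plausible reconstruction of testdata/build/Dockerfile, inferred from the
# build steps in the log above; the actual file in the minikube repo may
# differ in detail (the log reports a 97B Dockerfile).
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```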

TestFunctional/parallel/ImageCommands/Setup (1.56s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.537303531s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-427989
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/ServiceCmd/List (0.83s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.83s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service list -o json
functional_test.go:1504: Took "841.350816ms" to run "out/minikube-linux-amd64 -p functional-427989 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image load --daemon kicbase/echo-server:functional-427989 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.28:30439
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.28:30439
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image load --daemon kicbase/echo-server:functional-427989 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/MountCmd/any-port (11.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdany-port916936833/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765631759676661986" to /tmp/TestFunctionalparallelMountCmdany-port916936833/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765631759676661986" to /tmp/TestFunctionalparallelMountCmdany-port916936833/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765631759676661986" to /tmp/TestFunctionalparallelMountCmdany-port916936833/001/test-1765631759676661986
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (180.077792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 13:15:59.857107   20230 retry.go:31] will retry after 358.185928ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 test-1765631759676661986
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh cat /mount-9p/test-1765631759676661986
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-427989 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [42418bc7-591c-456c-83e9-f8a01bd7bb9a] Pending
helpers_test.go:353: "busybox-mount" [42418bc7-591c-456c-83e9-f8a01bd7bb9a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [42418bc7-591c-456c-83e9-f8a01bd7bb9a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [42418bc7-591c-456c-83e9-f8a01bd7bb9a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005909936s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-427989 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdany-port916936833/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.02s)
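The `retry.go:31] will retry after 358.185928ms: exit status 1` line above comes from the suite's retry helper: a failed findmnt probe is re-run after a randomized backoff until the 9p mount becomes visible. A minimal Go sketch of that pattern follows; it is an illustration under that assumption, not minikube's actual retry code.

```go
// retrysketch.go - minimal sketch of the retry-with-backoff pattern implied
// by the "will retry after ..." log lines above. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a randomized backoff between
// tries, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomize the delay, as the logged delays (358ms, 473ms, ...) suggest.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			// Stand-in for the findmnt probe not yet seeing the mount.
			return errors.New("exit status 1")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```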

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-427989
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image load --daemon kicbase/echo-server:functional-427989 --alsologtostderr
2025/12/13 13:16:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image save kicbase/echo-server:functional-427989 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "355.430224ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.672074ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image rm kicbase/echo-server:functional-427989 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "344.253012ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "72.294182ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-427989
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 image save --daemon kicbase/echo-server:functional-427989 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-427989
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

TestFunctional/parallel/DockerEnv/bash (0.83s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-427989 docker-env) && out/minikube-linux-amd64 status -p functional-427989"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-427989 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.83s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/MountCmd/specific-port (1.63s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdspecific-port3639508147/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.71577ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 13:16:10.927151   20230 retry.go:31] will retry after 473.119625ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdspecific-port3639508147/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh "sudo umount -f /mount-9p": exit status 1 (222.143677ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-427989 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdspecific-port3639508147/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T" /mount1: exit status 1 (221.481807ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 13:16:12.552885   20230 retry.go:31] will retry after 403.321593ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T" /mount2
I1213 13:16:13.203659   20230 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-427989 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-427989 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-427989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4005999498/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-427989
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-427989
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-427989
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-16298/.minikube/files/etc/test/nested/copy/20230/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (86.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-690060 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m26.147452907s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (86.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (59.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 13:18:17.997094   20230 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --alsologtostderr -v=8
E1213 13:18:32.500912   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:19:00.209760   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-690060 --alsologtostderr -v=8: (59.089513814s)
functional_test.go:678: soft start took 59.089910037s for "functional-690060" cluster.
I1213 13:19:17.086982   20230 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (59.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-690060 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1373257311/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache add minikube-local-cache-test:functional-690060
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache delete minikube-local-cache-test:functional-690060
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (189.037729ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 kubectl -- --context functional-690060 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-690060 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (53.77s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-690060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.771324983s)
functional_test.go:776: restart took 53.771469401s for "functional-690060" cluster.
I1213 13:20:16.592774   20230 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (53.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-690060 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
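Note: the phase/status lines above come from parsing the JSON of the kubectl call. The same view can be pulled directly with a jsonpath query (illustrative, not part of the test):
	kubectl --context functional-690060 get po -n kube-system -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'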

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 logs: (1.172671139s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1018440853/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1018440853/001/logs.txt: (1.172423386s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-690060 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-690060
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-690060: exit status 115 (273.742371ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.158:31255 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-690060 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.06s)
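Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here, because invalidsvc.yaml defines a Service with no running pod backing it. The sequence the test drives, for reference:
	kubectl --context functional-690060 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-690060   # exits 115: no running pod for the service
	kubectl --context functional-690060 delete -f testdata/invalidsvc.yaml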

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 config get cpus: exit status 14 (68.413558ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 config get cpus: exit status 14 (70.732983ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
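Note: exit status 14 is the expected result of config get on an unset key, which is what both Non-zero exits above assert. The cycle, for reference:
	out/minikube-linux-amd64 -p functional-690060 config set cpus 2
	out/minikube-linux-amd64 -p functional-690060 config get cpus    # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-690060 config unset cpus
	out/minikube-linux-amd64 -p functional-690060 config get cpus    # exit 14: key not found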

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-690060 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-690060 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 30008: os: process already finished
E1213 13:20:50.980701   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-690060 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (137.566632ms)
-- stdout --
	* [functional-690060] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 13:20:34.441662   29820 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:20:34.441862   29820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:20:34.441876   29820 out.go:374] Setting ErrFile to fd 2...
	I1213 13:20:34.441883   29820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:20:34.442114   29820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:20:34.442725   29820 out.go:368] Setting JSON to false
	I1213 13:20:34.444072   29820 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3785,"bootTime":1765628249,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:20:34.444165   29820 start.go:143] virtualization: kvm guest
	I1213 13:20:34.446924   29820 out.go:179] * [functional-690060] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:20:34.448862   29820 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:20:34.448891   29820 notify.go:221] Checking for updates...
	I1213 13:20:34.452660   29820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:20:34.454663   29820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:20:34.456311   29820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:20:34.458271   29820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:20:34.459850   29820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:20:34.462164   29820 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 13:20:34.462748   29820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:20:34.501223   29820 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 13:20:34.503138   29820 start.go:309] selected driver: kvm2
	I1213 13:20:34.503170   29820 start.go:927] validating driver "kvm2" against &{Name:functional-690060 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-690060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:20:34.503354   29820 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:20:34.506570   29820 out.go:203] 
	W1213 13:20:34.508141   29820 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:20:34.509783   29820 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --dry-run --alsologtostderr -v=1 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.28s)
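Note: --dry-run validates the requested flags against the saved profile without mutating it; exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is the expected rejection of the undersized request. For reference:
	out/minikube-linux-amd64 start -p functional-690060 --dry-run --memory 250MB --driver=kvm2 \
	  --kubernetes-version=v1.35.0-beta.0   # exits 23: 250MiB is below the 1800MB usable minimum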

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690060 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-690060 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: exit status 23 (136.997439ms)
-- stdout --
	* [functional-690060] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 13:20:33.864181   29751 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:20:33.864335   29751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:20:33.864343   29751 out.go:374] Setting ErrFile to fd 2...
	I1213 13:20:33.864351   29751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:20:33.864762   29751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:20:33.865865   29751 out.go:368] Setting JSON to false
	I1213 13:20:33.866833   29751 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3785,"bootTime":1765628249,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:20:33.866928   29751 start.go:143] virtualization: kvm guest
	I1213 13:20:33.868839   29751 out.go:179] * [functional-690060] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:20:33.870640   29751 notify.go:221] Checking for updates...
	I1213 13:20:33.870672   29751 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:20:33.871990   29751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:20:33.873518   29751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	I1213 13:20:33.874999   29751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	I1213 13:20:33.876002   29751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:20:33.877446   29751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:20:33.879456   29751 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 13:20:33.880075   29751 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:20:33.919898   29751 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 13:20:33.921346   29751 start.go:309] selected driver: kvm2
	I1213 13:20:33.921369   29751 start.go:927] validating driver "kvm2" against &{Name:functional-690060 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-690060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:20:33.921538   29751 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:20:33.924234   29751 out.go:203] 
	W1213 13:20:33.925550   29751 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:20:33.927102   29751 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.98s)
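Note: status output is rendered through Go template syntax over the Status fields; the "kublet:" label above is a literal (misspelled) label in the test's format string, not a field name. For reference:
	out/minikube-linux-amd64 -p functional-690060 status -f '{{.Host}},{{.Kubelet}},{{.APIServer}},{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-690060 status -o json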

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-690060 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-690060 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-vscs7" [7960100d-3848-49b9-9897-78cbc4904d22] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-vscs7" [7960100d-3848-49b9-9897-78cbc4904d22] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005285575s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.158:31907
functional_test.go:1680: http://192.168.39.158:31907: success! body:
Request served by hello-node-connect-9f67c86d4-vscs7

HTTP/1.1 GET /

Host: 192.168.39.158:31907
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.49s)
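Note: the test exposes the echo-server deployment as a NodePort and resolves its URL; the port (31907 here) is assigned by Kubernetes per run. For reference:
	kubectl --context functional-690060 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-690060 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-690060 service hello-node-connect --url   # prints http://<node-ip>:<node-port>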

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (30.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [d8bb1584-1c06-4ca9-a40f-bdb7b1a4ab20] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003706414s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-690060 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-690060 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-690060 get pvc myclaim -o=json
I1213 13:20:29.372595   20230 retry.go:31] will retry after 1.379019s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9694a38d-88aa-4118-9ad1-be1a353897cb ResourceVersion:742 Generation:0 CreationTimestamp:2025-12-13 13:20:29 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0013d6d70 VolumeMode:0xc0013d6d80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-690060 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-690060 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [3de42d36-d29d-4380-92d3-4fa21f275272] Pending
helpers_test.go:353: "sp-pod" [3de42d36-d29d-4380-92d3-4fa21f275272] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [3de42d36-d29d-4380-92d3-4fa21f275272] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005176102s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-690060 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-690060 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-690060 delete -f testdata/storage-provisioner/pod.yaml: (1.690385373s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-690060 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:20:45.083387   20230 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a5b2768e-0795-4304-ad06-028676d12566] Pending
E1213 13:20:45.847582   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:45.854132   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:45.865617   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:45.887696   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:45.929242   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:46.010882   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [a5b2768e-0795-4304-ad06-028676d12566] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1213 13:20:46.173119   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:46.495034   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [a5b2768e-0795-4304-ad06-028676d12566] Running
E1213 13:20:47.136815   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008908456s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-690060 exec sp-pod -- ls /tmp/mount
E1213 13:20:56.102395   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:21:06.343869   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (30.20s)
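Note: the retry at 13:20:29 is the test polling the claim until the storage-provisioner binds it. A minimal sketch of the same lifecycle (the jsonpath query is illustrative, not part of the test):
	kubectl --context functional-690060 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-690060 get pvc myclaim -o jsonpath='{.status.phase}'   # poll until "Bound"
	kubectl --context functional-690060 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-690060 exec sp-pod -- touch /tmp/mount/foo   # write through the claim
	kubectl --context functional-690060 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-690060 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-690060 exec sp-pod -- ls /tmp/mount          # foo survives the pod re-creation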

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh -n functional-690060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cp functional-690060:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2786177872/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh -n functional-690060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh -n functional-690060 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.18s)
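Note: minikube cp copies in both directions, addressing in-VM paths as <profile>:<path>. For reference (the local destination is illustrative):
	out/minikube-linux-amd64 -p functional-690060 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-690060 cp functional-690060:/home/docker/cp-test.txt ./cp-test.txt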

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (50.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-690060 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-l8d76" [20e583cf-a739-47c4-a71d-611edc75e86a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-l8d76" [20e583cf-a739-47c4-a71d-611edc75e86a] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 40.004295041s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;": exit status 1 (182.867271ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1213 13:21:19.055452   20230 retry.go:31] will retry after 1.334754168s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;": exit status 1 (255.139689ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1213 13:21:20.646120   20230 retry.go:31] will retry after 1.90972643s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;": exit status 1 (270.910748ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1213 13:21:22.827202   20230 retry.go:31] will retry after 2.663867168s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;": exit status 1 (252.901819ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1213 13:21:25.746275   20230 retry.go:31] will retry after 3.557512026s: exit status 1
E1213 13:21:26.825483   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (50.81s)
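Note: the ERROR 2002 then ERROR 1045 failures above are the normal startup sequence of the mysql container (socket not yet up, then credentials not yet applied); the test retries with increasing backoff until the query succeeds. The probe it repeats (pod name is run-specific):
	kubectl --context functional-690060 exec mysql-7d7b65bc95-l8d76 -- mysql -ppassword -e "show databases;"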

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/20230/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /etc/test/nested/copy/20230/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/20230.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /etc/ssl/certs/20230.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/20230.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /usr/share/ca-certificates/20230.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/202302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /etc/ssl/certs/202302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/202302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /usr/share/ca-certificates/202302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-690060 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)
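Note: the assertion reads the first node's label keys via a Go template. For reference:
	kubectl --context functional-690060 get nodes -o go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'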

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh "sudo systemctl is-active crio": exit status 1 (193.609112ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.19s)
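Note: with the docker runtime active, crio must be inactive; systemctl is-active exits 3 for an inactive unit, which surfaces here as the ssh exit status. For reference:
	out/minikube-linux-amd64 -p functional-690060 ssh "sudo systemctl is-active crio"   # prints "inactive", exits 3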

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690060 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-690060
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-690060
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690060 image ls --format short --alsologtostderr:
I1213 13:20:43.476579   30269 out.go:360] Setting OutFile to fd 1 ...
I1213 13:20:43.477095   30269 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:43.477113   30269 out.go:374] Setting ErrFile to fd 2...
I1213 13:20:43.477117   30269 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:43.477350   30269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:20:43.478256   30269 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:43.478592   30269 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:43.482510   30269 ssh_runner.go:195] Run: systemctl --version
I1213 13:20:43.486546   30269 main.go:143] libmachine: domain functional-690060 has defined MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:43.488671   30269 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:1f:9c", ip: ""} in network mk-functional-690060: {Iface:virbr1 ExpiryTime:2025-12-13 14:17:08 +0000 UTC Type:0 Mac:52:54:00:89:1f:9c Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-690060 Clientid:01:52:54:00:89:1f:9c}
I1213 13:20:43.488733   30269 main.go:143] libmachine: domain functional-690060 has defined IP address 192.168.39.158 and MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:43.488979   30269 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-690060/id_rsa Username:docker}
I1213 13:20:43.623557   30269 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.36s)
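Note: image ls renders the same image list in several formats; the short and table variants exercised by these two subtests can be replayed with:
	out/minikube-linux-amd64 -p functional-690060 image ls --format short
	out/minikube-linux-amd64 -p functional-690060 image ls --format table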

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690060 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-690060 │ d75ec51708b9d │ 1.24MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/minikube-local-cache-test │ functional-690060 │ 5a4415cad7aa8 │ 30B    │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ docker.io/kicbase/echo-server               │ functional-690060 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690060 image ls --format table --alsologtostderr:
I1213 13:20:48.709349   30360 out.go:360] Setting OutFile to fd 1 ...
I1213 13:20:48.709493   30360 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:48.709504   30360 out.go:374] Setting ErrFile to fd 2...
I1213 13:20:48.709508   30360 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:48.709772   30360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:20:48.710401   30360 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:48.710517   30360 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:48.712987   30360 ssh_runner.go:195] Run: systemctl --version
I1213 13:20:48.715536   30360 main.go:143] libmachine: domain functional-690060 has defined MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:48.716022   30360 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:1f:9c", ip: ""} in network mk-functional-690060: {Iface:virbr1 ExpiryTime:2025-12-13 14:17:08 +0000 UTC Type:0 Mac:52:54:00:89:1f:9c Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-690060 Clientid:01:52:54:00:89:1f:9c}
I1213 13:20:48.716052   30360 main.go:143] libmachine: domain functional-690060 has defined IP address 192.168.39.158 and MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:48.716274   30360 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-690060/id_rsa Username:docker}
I1213 13:20:48.803591   30360 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690060 image ls --format json --alsologtostderr:
[{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-690060","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"si
ze":"31500000"},{"id":"d75ec51708b9d74bc96ed03e118ade3d533d5ebc1c1e9ad6f4fddab9158fe2d7","repoDigests":[],"repoTags":["localhost/my-image:functional-690060"],"size":"1240000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b3560
91e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"5a4415cad7aa8d3f2eb549a93d109461c8e071c008961d0141089a929ec6f59a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-690060"],"size":"30"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690060 image ls --format json --alsologtostderr:
I1213 13:20:48.501419   30349 out.go:360] Setting OutFile to fd 1 ...
I1213 13:20:48.501741   30349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:48.501751   30349 out.go:374] Setting ErrFile to fd 2...
I1213 13:20:48.501755   30349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:48.502023   30349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:20:48.502667   30349 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:48.502780   30349 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:48.505178   30349 ssh_runner.go:195] Run: systemctl --version
I1213 13:20:48.507873   30349 main.go:143] libmachine: domain functional-690060 has defined MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:48.508600   30349 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:1f:9c", ip: ""} in network mk-functional-690060: {Iface:virbr1 ExpiryTime:2025-12-13 14:17:08 +0000 UTC Type:0 Mac:52:54:00:89:1f:9c Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-690060 Clientid:01:52:54:00:89:1f:9c}
I1213 13:20:48.508635   30349 main.go:143] libmachine: domain functional-690060 has defined IP address 192.168.39.158 and MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:48.508817   30349 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-690060/id_rsa Username:docker}
I1213 13:20:48.606984   30349 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690060 image ls --format yaml --alsologtostderr:
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5a4415cad7aa8d3f2eb549a93d109461c8e071c008961d0141089a929ec6f59a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-690060
size: "30"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-690060
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690060 image ls --format yaml --alsologtostderr:
I1213 13:20:43.840350   30280 out.go:360] Setting OutFile to fd 1 ...
I1213 13:20:43.840500   30280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:43.840508   30280 out.go:374] Setting ErrFile to fd 2...
I1213 13:20:43.840513   30280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:43.840765   30280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:20:43.841441   30280 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:43.841567   30280 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:43.844556   30280 ssh_runner.go:195] Run: systemctl --version
I1213 13:20:43.848325   30280 main.go:143] libmachine: domain functional-690060 has defined MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:43.849322   30280 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:1f:9c", ip: ""} in network mk-functional-690060: {Iface:virbr1 ExpiryTime:2025-12-13 14:17:08 +0000 UTC Type:0 Mac:52:54:00:89:1f:9c Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-690060 Clientid:01:52:54:00:89:1f:9c}
I1213 13:20:43.849370   30280 main.go:143] libmachine: domain functional-690060 has defined IP address 192.168.39.158 and MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:43.849746   30280 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-690060/id_rsa Username:docker}
I1213 13:20:43.949070   30280 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh pgrep buildkitd: exit status 1 (207.938977ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image build -t localhost/my-image:functional-690060 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 image build -t localhost/my-image:functional-690060 testdata/build --alsologtostderr: (3.994681095s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690060 image build -t localhost/my-image:functional-690060 testdata/build --alsologtostderr:
I1213 13:20:44.301293   30301 out.go:360] Setting OutFile to fd 1 ...
I1213 13:20:44.301602   30301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:44.301615   30301 out.go:374] Setting ErrFile to fd 2...
I1213 13:20:44.301620   30301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:20:44.301831   30301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
I1213 13:20:44.303837   30301 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:44.304688   30301 config.go:182] Loaded profile config "functional-690060": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 13:20:44.307217   30301 ssh_runner.go:195] Run: systemctl --version
I1213 13:20:44.310526   30301 main.go:143] libmachine: domain functional-690060 has defined MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:44.311152   30301 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:1f:9c", ip: ""} in network mk-functional-690060: {Iface:virbr1 ExpiryTime:2025-12-13 14:17:08 +0000 UTC Type:0 Mac:52:54:00:89:1f:9c Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-690060 Clientid:01:52:54:00:89:1f:9c}
I1213 13:20:44.311206   30301 main.go:143] libmachine: domain functional-690060 has defined IP address 192.168.39.158 and MAC address 52:54:00:89:1f:9c in network mk-functional-690060
I1213 13:20:44.311440   30301 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/functional-690060/id_rsa Username:docker}
I1213 13:20:44.433623   30301 build_images.go:162] Building image from path: /tmp/build.3878626257.tar
I1213 13:20:44.433699   30301 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:20:44.460945   30301 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3878626257.tar
I1213 13:20:44.468904   30301 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3878626257.tar: stat -c "%s %y" /var/lib/minikube/build/build.3878626257.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3878626257.tar': No such file or directory
I1213 13:20:44.468945   30301 ssh_runner.go:362] scp /tmp/build.3878626257.tar --> /var/lib/minikube/build/build.3878626257.tar (3072 bytes)
I1213 13:20:44.534074   30301 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3878626257
I1213 13:20:44.552313   30301 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3878626257 -xf /var/lib/minikube/build/build.3878626257.tar
I1213 13:20:44.572915   30301 docker.go:361] Building image: /var/lib/minikube/build/build.3878626257
I1213 13:20:44.573000   30301 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-690060 /var/lib/minikube/build/build.3878626257
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:d75ec51708b9d74bc96ed03e118ade3d533d5ebc1c1e9ad6f4fddab9158fe2d7 done
#8 naming to localhost/my-image:functional-690060 done
#8 DONE 0.1s
I1213 13:20:48.167719   30301 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-690060 /var/lib/minikube/build/build.3878626257: (3.594697379s)
I1213 13:20:48.167821   30301 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3878626257
I1213 13:20:48.196537   30301 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3878626257.tar
I1213 13:20:48.214547   30301 build_images.go:218] Built localhost/my-image:functional-690060 from /tmp/build.3878626257.tar
I1213 13:20:48.214582   30301 build_images.go:134] succeeded building to: functional-690060
I1213 13:20:48.214587   30301 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
E1213 13:20:48.418730   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.42s)
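Note: judging from BuildKit steps #5-#7 above, the testdata/build context amounts to a two-file fixture along these lines (a reconstruction from the logged steps, not necessarily the verbatim fixture):

    # Dockerfile (reconstructed; exact contents may differ)
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The build itself is driven from the host with "out/minikube-linux-amd64 -p functional-690060 image build -t localhost/my-image:functional-690060 testdata/build", which tars the context, copies it to /var/lib/minikube/build inside the guest, and runs docker build over SSH, exactly as the log above traces.
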
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3119744337/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765632023268008535" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3119744337/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765632023268008535" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3119744337/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765632023268008535" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3119744337/001/test-1765632023268008535
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (177.956667ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 13:20:23.446333   20230 retry.go:31] will retry after 644.561575ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:20 test-1765632023268008535
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh cat /mount-9p/test-1765632023268008535
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-690060 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [948f6151-d4c6-43f3-b969-3ca23dae9349] Pending
helpers_test.go:353: "busybox-mount" [948f6151-d4c6-43f3-b969-3ca23dae9349] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [948f6151-d4c6-43f3-b969-3ca23dae9349] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [948f6151-d4c6-43f3-b969-3ca23dae9349] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004820575s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-690060 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh stat /mount-9p/created-by-test
I1213 13:20:30.960347   20230 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3119744337/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.41s)
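Note: stripped of test plumbing, the flow above is: start the 9p mount in the background, verify it from inside the guest, exercise it, then unmount. A minimal sketch (the host directory is a placeholder):

    out/minikube-linux-amd64 mount -p functional-690060 /tmp/somedir:/mount-9p &
    out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-690060 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-690060 ssh "sudo umount -f /mount-9p"
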
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image load --daemon kicbase/echo-server:functional-690060 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 image load --daemon kicbase/echo-server:functional-690060 --alsologtostderr: (1.040951936s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image load --daemon kicbase/echo-server:functional-690060 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-690060
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image load --daemon kicbase/echo-server:functional-690060 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)
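Note: the three load variants above share one flow: tag an image in the host Docker daemon, push it into the cluster's runtime with image load --daemon, and confirm it with image ls:

    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-690060
    out/minikube-linux-amd64 -p functional-690060 image load --daemon kicbase/echo-server:functional-690060
    out/minikube-linux-amd64 -p functional-690060 image ls
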
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image save kicbase/echo-server:functional-690060 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image rm kicbase/echo-server:functional-690060 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.51s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-690060
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 image save --daemon kicbase/echo-server:functional-690060 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.51s)
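Note: taken together, the last four subtests form a save/remove/load round trip; the tarball path is arbitrary (this run used the Jenkins workspace):

    out/minikube-linux-amd64 -p functional-690060 image save kicbase/echo-server:functional-690060 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-690060 image rm kicbase/echo-server:functional-690060
    out/minikube-linux-amd64 -p functional-690060 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-690060 image save --daemon kicbase/echo-server:functional-690060
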
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.18s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-690060 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-690060 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-whfnq" [e01c8dcf-1c63-4be5-a373-ce95e69edad6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-whfnq" [e01c8dcf-1c63-4be5-a373-ce95e69edad6] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004781524s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.18s)
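Note: the deployment under test is two plain kubectl calls against the profile's context; the service becomes reachable once the pod reports Running (about 9s here):

    kubectl --context functional-690060 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-690060 expose deployment hello-node --type=NodePort --port=8080
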
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.54s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4210347774/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (169.07677ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 13:20:31.842385   20230 retry.go:31] will retry after 467.69992ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4210347774/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh "sudo umount -f /mount-9p": exit status 1 (232.004835ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-690060 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4210347774/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.54s)
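Note: the only difference from the any-port variant is pinning the host-side 9p server port with --port; the forced umount afterwards exits with status 32 simply because the mount had already been torn down ("umount: /mount-9p: not mounted."):

    out/minikube-linux-amd64 mount -p functional-690060 /tmp/somedir:/mount-9p --port 46464
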
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "413.097149ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.554811ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.64s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "559.789134ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "75.771617ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.64s)
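Note: the timings above hint at why --light exists: it returned in ~76ms versus ~560ms for the full listing, presumably because the light variant skips per-cluster status probes. The variants exercised:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light
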
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T" /mount1: exit status 1 (241.743159ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 13:20:33.459941   20230 retry.go:31] will retry after 272.470237ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-690060 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690060 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1498717304/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.16s)
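Note: VerifyCleanup demonstrates that a single kill switch tears down every outstanding mount daemon for the profile at once, after which the individual mount processes are found dead:

    out/minikube-linux-amd64 mount -p functional-690060 --kill=true
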
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.94s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-690060 docker-env) && out/minikube-linux-amd64 status -p functional-690060"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-690060 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.94s)
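Note: the same one-liner the test uses is handy interactively; docker-env emits DOCKER_HOST and related TLS exports so the host docker CLI talks to the daemon inside the VM:

    eval $(out/minikube-linux-amd64 -p functional-690060 docker-env)
    docker images
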
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 update-context --alsologtostderr -v=2
2025/12/13 13:20:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)
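Note: all three UpdateContextCmd variants reduce to the same call, which is meant to rewrite the profile's kubeconfig entry when the cluster's IP or port has changed:

    out/minikube-linux-amd64 -p functional-690060 update-context
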
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 service list: (1.249527704s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-690060 service list -o json: (1.258515929s)
functional_test.go:1504: Took "1.258633336s" to run "out/minikube-linux-amd64 -p functional-690060 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.158:30434
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-690060 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.158:30434
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.32s)
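Note: the four service lookups above map onto the following invocations; 30434 is the NodePort Kubernetes happened to assign in this run:

    out/minikube-linux-amd64 -p functional-690060 service list -o json
    out/minikube-linux-amd64 -p functional-690060 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-690060 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-690060 service hello-node --url
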
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-690060
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestGvisorAddon (203.07s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-126189 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-126189 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m6.526986597s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-126189 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-126189 cache add gcr.io/k8s-minikube/gvisor-addon:2: (5.186071936s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-126189 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-126189 addons enable gvisor: (6.000462507s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [1be61450-99e4-4d6b-ad68-c78f0d63e716] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.007226107s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-126189 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [f19ec499-0ef8-4ecf-af8f-f37c41e6d007] Pending
helpers_test.go:353: "nginx-gvisor" [f19ec499-0ef8-4ecf-af8f-f37c41e6d007] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-gvisor" [f19ec499-0ef8-4ecf-af8f-f37c41e6d007] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 48.006658672s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-126189
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-126189: (8.65683318s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-126189 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-126189 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (49.392508868s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:353: "gvisor" [1be61450-99e4-4d6b-ad68-c78f0d63e716] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:353: "gvisor" [1be61450-99e4-4d6b-ad68-c78f0d63e716] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.016907406s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:353: "nginx-gvisor" [f19ec499-0ef8-4ecf-af8f-f37c41e6d007] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.004835284s
helpers_test.go:176: Cleaning up "gvisor-126189" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-126189
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-126189: (1.033632157s)
--- PASS: TestGvisorAddon (203.07s)
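
For reference, the gVisor flow this test automates can be run by hand. A minimal sketch, assuming a kvm2-capable host; the profile name gvisor-demo is illustrative, while the image, addon, and label names are taken from the log above:

    # Start a containerd-based cluster, preload the addon image, enable gVisor:
    out/minikube-linux-amd64 start -p gvisor-demo --memory=3072 \
      --container-runtime=containerd --driver=kvm2
    out/minikube-linux-amd64 -p gvisor-demo cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-demo addons enable gvisor
    # Confirm the addon pod is running before scheduling gVisor workloads:
    kubectl --context gvisor-demo get pods -n kube-system \
      -l kubernetes.io/minikube-addons=gvisor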

TestMultiControlPlane/serial/StartCluster (240.48s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1213 13:22:07.787910   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:23:29.709746   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:23:32.496111   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.076008   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.082627   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.094156   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.115643   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.157160   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.239542   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.401138   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:23.723040   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:24.364712   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:25.646897   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:28.208917   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (3m59.821821024s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (240.48s)
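
The whole HA topology comes from a single start invocation; a sketch of the command exercised above (profile name illustrative). Per the status checks later in this log, the control planes are reached through one shared API endpoint (https://192.168.39.254:8443 here) rather than a node IP:

    # --ha provisions a multi-control-plane cluster; --wait true blocks until healthy.
    out/minikube-linux-amd64 -p ha-demo start --ha --memory 3072 --wait true --driver=kvm2
    out/minikube-linux-amd64 -p ha-demo status --alsologtostderr -v 5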

TestMultiControlPlane/serial/DeployApp (7.41s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- rollout status deployment/busybox
E1213 13:25:33.331343   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 kubectl -- rollout status deployment/busybox: (4.540536404s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-8hsgc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-k2qvv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-xf29h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-8hsgc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-k2qvv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-xf29h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-8hsgc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-k2qvv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-xf29h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.41s)
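
Each busybox replica is probed for DNS resolution of an external name, the service short name, and the fully qualified service name. One such probe, sketched (pod name illustrative):

    POD=busybox-example   # substitute a pod name from the get-pods output above
    out/minikube-linux-amd64 -p ha-demo kubectl -- exec "$POD" -- nslookup kubernetes.io
    out/minikube-linux-amd64 -p ha-demo kubectl -- exec "$POD" -- nslookup kubernetes.default
    out/minikube-linux-amd64 -p ha-demo kubectl -- exec "$POD" -- \
      nslookup kubernetes.default.svc.cluster.local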

TestMultiControlPlane/serial/PingHostFromPods (1.67s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-8hsgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-8hsgc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-k2qvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-k2qvv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-xf29h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 kubectl -- exec busybox-7b57f96db7-xf29h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
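
The test resolves host.minikube.internal from inside each pod, extracts the address from the nslookup output, and pings it; on this libvirt network that address is 192.168.39.1. The pipeline, sketched (pod name illustrative):

    POD=busybox-example
    HOST_IP=$(out/minikube-linux-amd64 -p ha-demo kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-demo kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"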

TestMultiControlPlane/serial/AddWorkerNode (53.21s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node add --alsologtostderr -v 5
E1213 13:25:43.573472   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:25:45.847787   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:26:04.055185   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:26:13.551588   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 node add --alsologtostderr -v 5: (52.410943322s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.21s)

TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-633485 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

TestMultiControlPlane/serial/CopyFile (12.06s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp testdata/cp-test.txt ha-633485:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422930440/001/cp-test_ha-633485.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485:/home/docker/cp-test.txt ha-633485-m02:/home/docker/cp-test_ha-633485_ha-633485-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test_ha-633485_ha-633485-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485:/home/docker/cp-test.txt ha-633485-m03:/home/docker/cp-test_ha-633485_ha-633485-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test_ha-633485_ha-633485-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485:/home/docker/cp-test.txt ha-633485-m04:/home/docker/cp-test_ha-633485_ha-633485-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test_ha-633485_ha-633485-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp testdata/cp-test.txt ha-633485-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422930440/001/cp-test_ha-633485-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m02:/home/docker/cp-test.txt ha-633485:/home/docker/cp-test_ha-633485-m02_ha-633485.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test_ha-633485-m02_ha-633485.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m02:/home/docker/cp-test.txt ha-633485-m03:/home/docker/cp-test_ha-633485-m02_ha-633485-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test_ha-633485-m02_ha-633485-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m02:/home/docker/cp-test.txt ha-633485-m04:/home/docker/cp-test_ha-633485-m02_ha-633485-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test_ha-633485-m02_ha-633485-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp testdata/cp-test.txt ha-633485-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422930440/001/cp-test_ha-633485-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m03:/home/docker/cp-test.txt ha-633485:/home/docker/cp-test_ha-633485-m03_ha-633485.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test_ha-633485-m03_ha-633485.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m03:/home/docker/cp-test.txt ha-633485-m02:/home/docker/cp-test_ha-633485-m03_ha-633485-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test_ha-633485-m03_ha-633485-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m03:/home/docker/cp-test.txt ha-633485-m04:/home/docker/cp-test_ha-633485-m03_ha-633485-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test_ha-633485-m03_ha-633485-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp testdata/cp-test.txt ha-633485-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422930440/001/cp-test_ha-633485-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m04:/home/docker/cp-test.txt ha-633485:/home/docker/cp-test_ha-633485-m04_ha-633485.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485 "sudo cat /home/docker/cp-test_ha-633485-m04_ha-633485.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m04:/home/docker/cp-test.txt ha-633485-m02:/home/docker/cp-test_ha-633485-m04_ha-633485-m02.txt
E1213 13:26:45.017209   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m02 "sudo cat /home/docker/cp-test_ha-633485-m04_ha-633485-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 cp ha-633485-m04:/home/docker/cp-test.txt ha-633485-m03:/home/docker/cp-test_ha-633485-m04_ha-633485-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 ssh -n ha-633485-m03 "sudo cat /home/docker/cp-test_ha-633485-m04_ha-633485-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.06s)
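
The runs above cover every source/destination pair of the four nodes. One round-trip, sketched (ha-demo profile and file paths follow the log's pattern):

    # Host -> node, verify over ssh, then node -> node:
    out/minikube-linux-amd64 -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-demo ssh -n ha-demo "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-demo cp ha-demo:/home/docker/cp-test.txt \
      ha-demo-m02:/home/docker/cp-test.txt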

TestMultiControlPlane/serial/StopSecondaryNode (14.26s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 node stop m02 --alsologtostderr -v 5: (13.697168716s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5: exit status 7 (561.712657ms)
-- stdout --
	ha-633485
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-633485-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-633485-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-633485-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1213 13:27:00.026426   33441 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:27:00.026533   33441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:27:00.026538   33441 out.go:374] Setting ErrFile to fd 2...
	I1213 13:27:00.026542   33441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:27:00.026765   33441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:27:00.027002   33441 out.go:368] Setting JSON to false
	I1213 13:27:00.027037   33441 mustload.go:66] Loading cluster: ha-633485
	I1213 13:27:00.027236   33441 notify.go:221] Checking for updates...
	I1213 13:27:00.027516   33441 config.go:182] Loaded profile config "ha-633485": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:27:00.027542   33441 status.go:174] checking status of ha-633485 ...
	I1213 13:27:00.030725   33441 status.go:371] ha-633485 host status = "Running" (err=<nil>)
	I1213 13:27:00.030752   33441 host.go:66] Checking if "ha-633485" exists ...
	I1213 13:27:00.035318   33441 main.go:143] libmachine: domain ha-633485 has defined MAC address 52:54:00:ba:ef:ff in network mk-ha-633485
	I1213 13:27:00.036300   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:ef:ff", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:21:46 +0000 UTC Type:0 Mac:52:54:00:ba:ef:ff Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-633485 Clientid:01:52:54:00:ba:ef:ff}
	I1213 13:27:00.036351   33441 main.go:143] libmachine: domain ha-633485 has defined IP address 192.168.39.246 and MAC address 52:54:00:ba:ef:ff in network mk-ha-633485
	I1213 13:27:00.036619   33441 host.go:66] Checking if "ha-633485" exists ...
	I1213 13:27:00.036870   33441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:27:00.040173   33441 main.go:143] libmachine: domain ha-633485 has defined MAC address 52:54:00:ba:ef:ff in network mk-ha-633485
	I1213 13:27:00.040777   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:ef:ff", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:21:46 +0000 UTC Type:0 Mac:52:54:00:ba:ef:ff Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-633485 Clientid:01:52:54:00:ba:ef:ff}
	I1213 13:27:00.040805   33441 main.go:143] libmachine: domain ha-633485 has defined IP address 192.168.39.246 and MAC address 52:54:00:ba:ef:ff in network mk-ha-633485
	I1213 13:27:00.041019   33441 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/ha-633485/id_rsa Username:docker}
	I1213 13:27:00.127053   33441 ssh_runner.go:195] Run: systemctl --version
	I1213 13:27:00.134823   33441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:27:00.158345   33441 kubeconfig.go:125] found "ha-633485" server: "https://192.168.39.254:8443"
	I1213 13:27:00.158394   33441 api_server.go:166] Checking apiserver status ...
	I1213 13:27:00.158468   33441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:27:00.187520   33441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2489/cgroup
	W1213 13:27:00.202138   33441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2489/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:27:00.202208   33441 ssh_runner.go:195] Run: ls
	I1213 13:27:00.210174   33441 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 13:27:00.217541   33441 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 13:27:00.217573   33441 status.go:463] ha-633485 apiserver status = Running (err=<nil>)
	I1213 13:27:00.217582   33441 status.go:176] ha-633485 status: &{Name:ha-633485 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:27:00.217599   33441 status.go:174] checking status of ha-633485-m02 ...
	I1213 13:27:00.219796   33441 status.go:371] ha-633485-m02 host status = "Stopped" (err=<nil>)
	I1213 13:27:00.219833   33441 status.go:384] host is not running, skipping remaining checks
	I1213 13:27:00.219838   33441 status.go:176] ha-633485-m02 status: &{Name:ha-633485-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:27:00.219856   33441 status.go:174] checking status of ha-633485-m03 ...
	I1213 13:27:00.221898   33441 status.go:371] ha-633485-m03 host status = "Running" (err=<nil>)
	I1213 13:27:00.221924   33441 host.go:66] Checking if "ha-633485-m03" exists ...
	I1213 13:27:00.225206   33441 main.go:143] libmachine: domain ha-633485-m03 has defined MAC address 52:54:00:16:0f:75 in network mk-ha-633485
	I1213 13:27:00.225760   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:0f:75", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:24:04 +0000 UTC Type:0 Mac:52:54:00:16:0f:75 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-633485-m03 Clientid:01:52:54:00:16:0f:75}
	I1213 13:27:00.225803   33441 main.go:143] libmachine: domain ha-633485-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:16:0f:75 in network mk-ha-633485
	I1213 13:27:00.226011   33441 host.go:66] Checking if "ha-633485-m03" exists ...
	I1213 13:27:00.226336   33441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:27:00.228743   33441 main.go:143] libmachine: domain ha-633485-m03 has defined MAC address 52:54:00:16:0f:75 in network mk-ha-633485
	I1213 13:27:00.229134   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:0f:75", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:24:04 +0000 UTC Type:0 Mac:52:54:00:16:0f:75 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-633485-m03 Clientid:01:52:54:00:16:0f:75}
	I1213 13:27:00.229176   33441 main.go:143] libmachine: domain ha-633485-m03 has defined IP address 192.168.39.52 and MAC address 52:54:00:16:0f:75 in network mk-ha-633485
	I1213 13:27:00.229308   33441 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/ha-633485-m03/id_rsa Username:docker}
	I1213 13:27:00.321926   33441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:27:00.341990   33441 kubeconfig.go:125] found "ha-633485" server: "https://192.168.39.254:8443"
	I1213 13:27:00.342021   33441 api_server.go:166] Checking apiserver status ...
	I1213 13:27:00.342070   33441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:27:00.363841   33441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2347/cgroup
	W1213 13:27:00.379647   33441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2347/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:27:00.379728   33441 ssh_runner.go:195] Run: ls
	I1213 13:27:00.386565   33441 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 13:27:00.392371   33441 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 13:27:00.392402   33441 status.go:463] ha-633485-m03 apiserver status = Running (err=<nil>)
	I1213 13:27:00.392430   33441 status.go:176] ha-633485-m03 status: &{Name:ha-633485-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:27:00.392450   33441 status.go:174] checking status of ha-633485-m04 ...
	I1213 13:27:00.394940   33441 status.go:371] ha-633485-m04 host status = "Running" (err=<nil>)
	I1213 13:27:00.394976   33441 host.go:66] Checking if "ha-633485-m04" exists ...
	I1213 13:27:00.398314   33441 main.go:143] libmachine: domain ha-633485-m04 has defined MAC address 52:54:00:81:33:d5 in network mk-ha-633485
	I1213 13:27:00.398838   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:81:33:d5", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:25:57 +0000 UTC Type:0 Mac:52:54:00:81:33:d5 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-633485-m04 Clientid:01:52:54:00:81:33:d5}
	I1213 13:27:00.398875   33441 main.go:143] libmachine: domain ha-633485-m04 has defined IP address 192.168.39.31 and MAC address 52:54:00:81:33:d5 in network mk-ha-633485
	I1213 13:27:00.399065   33441 host.go:66] Checking if "ha-633485-m04" exists ...
	I1213 13:27:00.399301   33441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:27:00.401757   33441 main.go:143] libmachine: domain ha-633485-m04 has defined MAC address 52:54:00:81:33:d5 in network mk-ha-633485
	I1213 13:27:00.402211   33441 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:81:33:d5", ip: ""} in network mk-ha-633485: {Iface:virbr1 ExpiryTime:2025-12-13 14:25:57 +0000 UTC Type:0 Mac:52:54:00:81:33:d5 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:ha-633485-m04 Clientid:01:52:54:00:81:33:d5}
	I1213 13:27:00.402235   33441 main.go:143] libmachine: domain ha-633485-m04 has defined IP address 192.168.39.31 and MAC address 52:54:00:81:33:d5 in network mk-ha-633485
	I1213 13:27:00.402515   33441 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/ha-633485-m04/id_rsa Username:docker}
	I1213 13:27:00.492164   33441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:27:00.512012   33441 status.go:176] ha-633485-m04 status: &{Name:ha-633485-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.26s)
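
Note the exit code: with m02 stopped, status exits non-zero (7 here) rather than 0, so a degraded cluster can be detected in scripts without parsing the table. A sketch:

    out/minikube-linux-amd64 -p ha-demo node stop m02
    out/minikube-linux-amd64 -p ha-demo status
    rc=$?
    [ "$rc" -ne 0 ] && echo "cluster degraded (status exited $rc)"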

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.38s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 node start m02 --alsologtostderr -v 5: (31.412561229s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.88s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 stop --alsologtostderr -v 5
E1213 13:28:06.939633   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 stop --alsologtostderr -v 5: (40.897650749s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 start --wait true --alsologtostderr -v 5
E1213 13:28:32.495777   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:29:55.572676   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 start --wait true --alsologtostderr -v 5: (2m6.809465267s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.25s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node delete m03 --alsologtostderr -v 5
E1213 13:30:23.075188   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 node delete m03 --alsologtostderr -v 5: (7.526552211s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.25s)
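
After the delete, node readiness is asserted with a go-template rather than jsonpath; the same check as a standalone command:

    # Prints the Ready condition status (True/False) for every remaining node:
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'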

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

TestMultiControlPlane/serial/StopCluster (41.04s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 stop --alsologtostderr -v 5
E1213 13:30:45.847796   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:30:50.781772   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 stop --alsologtostderr -v 5: (40.965499991s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5: exit status 7 (70.608447ms)
-- stdout --
	ha-633485
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-633485-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-633485-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1213 13:31:12.127256   35035 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:31:12.127523   35035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:31:12.127532   35035 out.go:374] Setting ErrFile to fd 2...
	I1213 13:31:12.127536   35035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:31:12.127752   35035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:31:12.127935   35035 out.go:368] Setting JSON to false
	I1213 13:31:12.127962   35035 mustload.go:66] Loading cluster: ha-633485
	I1213 13:31:12.128055   35035 notify.go:221] Checking for updates...
	I1213 13:31:12.128434   35035 config.go:182] Loaded profile config "ha-633485": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:31:12.128462   35035 status.go:174] checking status of ha-633485 ...
	I1213 13:31:12.130705   35035 status.go:371] ha-633485 host status = "Stopped" (err=<nil>)
	I1213 13:31:12.130726   35035 status.go:384] host is not running, skipping remaining checks
	I1213 13:31:12.130732   35035 status.go:176] ha-633485 status: &{Name:ha-633485 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:31:12.130752   35035 status.go:174] checking status of ha-633485-m02 ...
	I1213 13:31:12.132245   35035 status.go:371] ha-633485-m02 host status = "Stopped" (err=<nil>)
	I1213 13:31:12.132267   35035 status.go:384] host is not running, skipping remaining checks
	I1213 13:31:12.132273   35035 status.go:176] ha-633485-m02 status: &{Name:ha-633485-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:31:12.132290   35035 status.go:174] checking status of ha-633485-m04 ...
	I1213 13:31:12.133739   35035 status.go:371] ha-633485-m04 host status = "Stopped" (err=<nil>)
	I1213 13:31:12.133758   35035 status.go:384] host is not running, skipping remaining checks
	I1213 13:31:12.133764   35035 status.go:176] ha-633485-m04 status: &{Name:ha-633485-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.04s)

TestMultiControlPlane/serial/RestartCluster (121.57s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 start --wait true --alsologtostderr -v 5 --driver=kvm2 
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (2m0.833888134s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.57s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (86.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 node add --control-plane --alsologtostderr -v 5
E1213 13:33:32.498205   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-633485 node add --control-plane --alsologtostderr -v 5: (1m25.939104293s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-633485 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestImageBuild/serial/Setup (48.89s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-187561 --driver=kvm2 
E1213 13:35:23.075902   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-187561 --driver=kvm2 : (48.894221988s)
--- PASS: TestImageBuild/serial/Setup (48.89s)

TestImageBuild/serial/NormalBuild (2.02s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-187561
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-187561: (2.022459172s)
--- PASS: TestImageBuild/serial/NormalBuild (2.02s)

TestImageBuild/serial/BuildWithBuildArg (1.43s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-187561
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-187561: (1.430693053s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.43s)

TestImageBuild/serial/BuildWithDockerIgnore (1.41s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-187561
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-187561: (1.410568897s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.41s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-187561
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)
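
The serial/* runs above map onto three flag combinations of minikube image build: a plain build, --build-opt for build args and cache control, and -f for a non-default Dockerfile path. The commands from the log, gathered for reference:

    out/minikube-linux-amd64 -p image-187561 image build -t aaa:latest ./testdata/image-build/test-normal
    out/minikube-linux-amd64 -p image-187561 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
    out/minikube-linux-amd64 -p image-187561 image build -t aaa:latest \
      -f inner/Dockerfile ./testdata/image-build/test-f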

TestJSONOutput/start/Command (92.43s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-168434 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1213 13:35:45.849923   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:37:08.915226   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-168434 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m32.42592689s)
--- PASS: TestJSONOutput/start/Command (92.43s)
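
With --output=json, each progress line is a standalone CloudEvents JSON object (samples appear in the TestErrorJSONOutput stdout further down). A sketch of filtering step messages, assuming jq is installed and json-demo as an illustrative profile:

    out/minikube-linux-amd64 start -p json-demo --output=json --driver=kvm2 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'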

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-168434 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-168434 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-168434 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-168434 --output=json --user=testUser: (8.211734804s)
--- PASS: TestJSONOutput/stop/Command (8.21s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-158111 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-158111 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (90.089634ms)

-- stdout --
	{"specversion":"1.0","id":"e4f74909-4e59-41c4-82e2-c31491c08f8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-158111] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74395dcf-d466-4533-8546-b2c11eb05107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"ce8995fc-ea5a-4ad5-998f-89c269307385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6204ec1e-b738-44c9-a60c-a9ea26bf9bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig"}}
	{"specversion":"1.0","id":"15841f3b-aedf-4b81-9c24-be529aef57b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube"}}
	{"specversion":"1.0","id":"1db11f04-4b24-408c-b0ee-5bef175d8db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"90389f29-3b90-45af-8762-a8735ff3e284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c7509c2c-de99-4a51-bfc6-6284919d2118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-158111" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-158111
--- PASS: TestErrorJSONOutput (0.28s)
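Note: each line in the JSON output above is a CloudEvents envelope emitted by minikube's --output=json mode. As an illustrative sketch only (hypothetical struct and names, not code from the test suite), one such event line can be decoded in Go like this:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope fields visible in the output above.
// Data is a string map because every payload value shown in the log
// (currentstep, totalsteps, exitcode, message, ...) is a JSON string.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Sample event modeled on the DRV_UNSUPPORTED_OS error above; the id is a placeholder.
	line := `{"specversion":"1.0","id":"00000000-0000-0000-0000-000000000000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, "->", ev.Data["name"]+":", ev.Data["message"])
}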

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (99.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-152006 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-152006 --driver=kvm2 : (48.421006183s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-154552 --driver=kvm2 
E1213 13:38:32.499882   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-154552 --driver=kvm2 : (48.631384346s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-152006
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-154552
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-154552" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-154552
helpers_test.go:176: Cleaning up "first-152006" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-152006
--- PASS: TestMinikubeProfile (99.90s)

TestMountStart/serial/StartWithMountFirst (25.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-987665 --memory=3072 --mount-string /tmp/TestMountStartserial560202932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-987665 --memory=3072 --mount-string /tmp/TestMountStartserial560202932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (24.738276704s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.74s)

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-987665 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-987665 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (24.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-002383 --memory=3072 --mount-string /tmp/TestMountStartserial560202932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-002383 --memory=3072 --mount-string /tmp/TestMountStartserial560202932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (23.632031674s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.63s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-987665 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.44s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-002383
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-002383: (1.437404372s)
--- PASS: TestMountStart/serial/Stop (1.44s)

TestMountStart/serial/RestartStopped (23.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-002383
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-002383: (22.286278956s)
--- PASS: TestMountStart/serial/RestartStopped (23.29s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002383 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (122.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-888278 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1213 13:40:23.075983   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:40:45.848585   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:41:46.144046   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-888278 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (2m2.423875905s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.81s)

TestMultiNode/serial/DeployApp2Nodes (6.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-888278 -- rollout status deployment/busybox: (4.569928107s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-mpmz6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-rqbfr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-mpmz6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-rqbfr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-mpmz6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-rqbfr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.56s)

TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-mpmz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-mpmz6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-rqbfr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-888278 -- exec busybox-7b57f96db7-rqbfr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)

TestMultiNode/serial/AddNode (55.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-888278 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-888278 -v=5 --alsologtostderr: (55.38378332s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.90s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-888278 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.53s)

TestMultiNode/serial/CopyFile (6.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp testdata/cp-test.txt multinode-888278:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile601881458/001/cp-test_multinode-888278.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278:/home/docker/cp-test.txt multinode-888278-m02:/home/docker/cp-test_multinode-888278_multinode-888278-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test_multinode-888278_multinode-888278-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278:/home/docker/cp-test.txt multinode-888278-m03:/home/docker/cp-test_multinode-888278_multinode-888278-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test_multinode-888278_multinode-888278-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp testdata/cp-test.txt multinode-888278-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile601881458/001/cp-test_multinode-888278-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m02:/home/docker/cp-test.txt multinode-888278:/home/docker/cp-test_multinode-888278-m02_multinode-888278.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test_multinode-888278-m02_multinode-888278.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m02:/home/docker/cp-test.txt multinode-888278-m03:/home/docker/cp-test_multinode-888278-m02_multinode-888278-m03.txt
E1213 13:43:32.495971   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test_multinode-888278-m02_multinode-888278-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp testdata/cp-test.txt multinode-888278-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile601881458/001/cp-test_multinode-888278-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m03:/home/docker/cp-test.txt multinode-888278:/home/docker/cp-test_multinode-888278-m03_multinode-888278.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278 "sudo cat /home/docker/cp-test_multinode-888278-m03_multinode-888278.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 cp multinode-888278-m03:/home/docker/cp-test.txt multinode-888278-m02:/home/docker/cp-test_multinode-888278-m03_multinode-888278-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 ssh -n multinode-888278-m02 "sudo cat /home/docker/cp-test_multinode-888278-m03_multinode-888278-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.91s)

TestMultiNode/serial/StopNode (2.68s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-888278 node stop m03: (1.895006633s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-888278 status: exit status 7 (389.048763ms)

-- stdout --
	multinode-888278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-888278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-888278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr: exit status 7 (393.631382ms)

-- stdout --
	multinode-888278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-888278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-888278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 13:43:37.651991   41443 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:43:37.652166   41443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:43:37.652180   41443 out.go:374] Setting ErrFile to fd 2...
	I1213 13:43:37.652188   41443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:43:37.652548   41443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:43:37.652777   41443 out.go:368] Setting JSON to false
	I1213 13:43:37.652809   41443 mustload.go:66] Loading cluster: multinode-888278
	I1213 13:43:37.652986   41443 notify.go:221] Checking for updates...
	I1213 13:43:37.653169   41443 config.go:182] Loaded profile config "multinode-888278": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:43:37.653182   41443 status.go:174] checking status of multinode-888278 ...
	I1213 13:43:37.656383   41443 status.go:371] multinode-888278 host status = "Running" (err=<nil>)
	I1213 13:43:37.656426   41443 host.go:66] Checking if "multinode-888278" exists ...
	I1213 13:43:37.660259   41443 main.go:143] libmachine: domain multinode-888278 has defined MAC address 52:54:00:34:95:9d in network mk-multinode-888278
	I1213 13:43:37.661256   41443 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:95:9d", ip: ""} in network mk-multinode-888278: {Iface:virbr1 ExpiryTime:2025-12-13 14:40:37 +0000 UTC Type:0 Mac:52:54:00:34:95:9d Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-888278 Clientid:01:52:54:00:34:95:9d}
	I1213 13:43:37.661313   41443 main.go:143] libmachine: domain multinode-888278 has defined IP address 192.168.39.220 and MAC address 52:54:00:34:95:9d in network mk-multinode-888278
	I1213 13:43:37.661600   41443 host.go:66] Checking if "multinode-888278" exists ...
	I1213 13:43:37.661947   41443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:43:37.664991   41443 main.go:143] libmachine: domain multinode-888278 has defined MAC address 52:54:00:34:95:9d in network mk-multinode-888278
	I1213 13:43:37.665576   41443 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:95:9d", ip: ""} in network mk-multinode-888278: {Iface:virbr1 ExpiryTime:2025-12-13 14:40:37 +0000 UTC Type:0 Mac:52:54:00:34:95:9d Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-888278 Clientid:01:52:54:00:34:95:9d}
	I1213 13:43:37.665609   41443 main.go:143] libmachine: domain multinode-888278 has defined IP address 192.168.39.220 and MAC address 52:54:00:34:95:9d in network mk-multinode-888278
	I1213 13:43:37.665811   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/multinode-888278/id_rsa Username:docker}
	I1213 13:43:37.749576   41443 ssh_runner.go:195] Run: systemctl --version
	I1213 13:43:37.767125   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:43:37.790620   41443 kubeconfig.go:125] found "multinode-888278" server: "https://192.168.39.220:8443"
	I1213 13:43:37.790678   41443 api_server.go:166] Checking apiserver status ...
	I1213 13:43:37.790734   41443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:43:37.814373   41443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2509/cgroup
	W1213 13:43:37.829368   41443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:43:37.829518   41443 ssh_runner.go:195] Run: ls
	I1213 13:43:37.836784   41443 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1213 13:43:37.845607   41443 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1213 13:43:37.845637   41443 status.go:463] multinode-888278 apiserver status = Running (err=<nil>)
	I1213 13:43:37.845649   41443 status.go:176] multinode-888278 status: &{Name:multinode-888278 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:43:37.845676   41443 status.go:174] checking status of multinode-888278-m02 ...
	I1213 13:43:37.847549   41443 status.go:371] multinode-888278-m02 host status = "Running" (err=<nil>)
	I1213 13:43:37.847577   41443 host.go:66] Checking if "multinode-888278-m02" exists ...
	I1213 13:43:37.850938   41443 main.go:143] libmachine: domain multinode-888278-m02 has defined MAC address 52:54:00:a3:fd:83 in network mk-multinode-888278
	I1213 13:43:37.851401   41443 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:fd:83", ip: ""} in network mk-multinode-888278: {Iface:virbr1 ExpiryTime:2025-12-13 14:41:49 +0000 UTC Type:0 Mac:52:54:00:a3:fd:83 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-888278-m02 Clientid:01:52:54:00:a3:fd:83}
	I1213 13:43:37.851484   41443 main.go:143] libmachine: domain multinode-888278-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:a3:fd:83 in network mk-multinode-888278
	I1213 13:43:37.851729   41443 host.go:66] Checking if "multinode-888278-m02" exists ...
	I1213 13:43:37.852142   41443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:43:37.855180   41443 main.go:143] libmachine: domain multinode-888278-m02 has defined MAC address 52:54:00:a3:fd:83 in network mk-multinode-888278
	I1213 13:43:37.855901   41443 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:fd:83", ip: ""} in network mk-multinode-888278: {Iface:virbr1 ExpiryTime:2025-12-13 14:41:49 +0000 UTC Type:0 Mac:52:54:00:a3:fd:83 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-888278-m02 Clientid:01:52:54:00:a3:fd:83}
	I1213 13:43:37.855934   41443 main.go:143] libmachine: domain multinode-888278-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:a3:fd:83 in network mk-multinode-888278
	I1213 13:43:37.856138   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-16298/.minikube/machines/multinode-888278-m02/id_rsa Username:docker}
	I1213 13:43:37.946718   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:43:37.969081   41443 status.go:176] multinode-888278-m02 status: &{Name:multinode-888278-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:43:37.969118   41443 status.go:174] checking status of multinode-888278-m03 ...
	I1213 13:43:37.971124   41443 status.go:371] multinode-888278-m03 host status = "Stopped" (err=<nil>)
	I1213 13:43:37.971154   41443 status.go:384] host is not running, skipping remaining checks
	I1213 13:43:37.971160   41443 status.go:176] multinode-888278-m03 status: &{Name:multinode-888278-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.68s)

TestMultiNode/serial/StartAfterStop (47.48s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-888278 node start m03 -v=5 --alsologtostderr: (46.894567889s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (47.48s)

TestMultiNode/serial/RestartKeepsNodes (200.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-888278
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-888278
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-888278: (29.963945347s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-888278 --wait=true -v=5 --alsologtostderr
E1213 13:45:23.075525   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:45:45.848321   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:35.575117   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-888278 --wait=true -v=5 --alsologtostderr: (2m50.801471189s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-888278
--- PASS: TestMultiNode/serial/RestartKeepsNodes (200.90s)

TestMultiNode/serial/DeleteNode (2.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-888278 node delete m03: (1.952863326s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.50s)

TestMultiNode/serial/StopMultiNode (27.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-888278 stop: (27.692024556s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-888278 status: exit status 7 (67.971666ms)

-- stdout --
	multinode-888278
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-888278-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr: exit status 7 (67.519519ms)

-- stdout --
	multinode-888278
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-888278-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 13:48:16.680812   42979 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:48:16.680974   42979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:48:16.680985   42979 out.go:374] Setting ErrFile to fd 2...
	I1213 13:48:16.680989   42979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:48:16.681197   42979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:48:16.681445   42979 out.go:368] Setting JSON to false
	I1213 13:48:16.681476   42979 mustload.go:66] Loading cluster: multinode-888278
	I1213 13:48:16.681560   42979 notify.go:221] Checking for updates...
	I1213 13:48:16.682009   42979 config.go:182] Loaded profile config "multinode-888278": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:48:16.682031   42979 status.go:174] checking status of multinode-888278 ...
	I1213 13:48:16.684484   42979 status.go:371] multinode-888278 host status = "Stopped" (err=<nil>)
	I1213 13:48:16.684508   42979 status.go:384] host is not running, skipping remaining checks
	I1213 13:48:16.684514   42979 status.go:176] multinode-888278 status: &{Name:multinode-888278 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:48:16.684538   42979 status.go:174] checking status of multinode-888278-m02 ...
	I1213 13:48:16.685830   42979 status.go:371] multinode-888278-m02 host status = "Stopped" (err=<nil>)
	I1213 13:48:16.685846   42979 status.go:384] host is not running, skipping remaining checks
	I1213 13:48:16.685851   42979 status.go:176] multinode-888278-m02 status: &{Name:multinode-888278-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (27.83s)

TestMultiNode/serial/RestartMultiNode (131.94s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-888278 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E1213 13:48:32.495992   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:50:23.075459   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-888278 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (2m11.41615491s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-888278 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (131.94s)

TestMultiNode/serial/ValidateNameConflict (49.4s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-888278
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-888278-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-888278-m02 --driver=kvm2 : exit status 14 (94.001529ms)

-- stdout --
	* [multinode-888278-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-888278-m02' is duplicated with machine name 'multinode-888278-m02' in profile 'multinode-888278'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-888278-m03 --driver=kvm2 
E1213 13:50:45.849690   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-888278-m03 --driver=kvm2 : (48.11247578s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-888278
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-888278: exit status 80 (242.260842ms)

-- stdout --
	* Adding node m03 to cluster multinode-888278 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-888278-m03 already exists in multinode-888278-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-888278-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.40s)

TestPreload (165.21s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 : (1m38.797412521s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-611905 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-611905 image pull gcr.io/k8s-minikube/busybox: (2.151806726s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-611905
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-611905: (12.80343737s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-611905 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1213 13:53:32.495189   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:53:48.916651   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-611905 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 : (50.397605089s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-611905 image list
helpers_test.go:176: Cleaning up "test-preload-611905" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-611905
--- PASS: TestPreload (165.21s)

TestScheduledStopUnix (119.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-586694 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-586694 --memory=3072 --driver=kvm2 : (47.832673608s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586694 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 13:54:52.739970   45449 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:54:52.740304   45449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:54:52.740316   45449 out.go:374] Setting ErrFile to fd 2...
	I1213 13:54:52.740320   45449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:54:52.740598   45449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:54:52.740941   45449 out.go:368] Setting JSON to false
	I1213 13:54:52.741049   45449 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:54:52.741418   45449 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:54:52.741509   45449 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/config.json ...
	I1213 13:54:52.741744   45449 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:54:52.741872   45449 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-586694 -n scheduled-stop-586694
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586694 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 13:54:53.083456   45493 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:54:53.083577   45493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:54:53.083582   45493 out.go:374] Setting ErrFile to fd 2...
	I1213 13:54:53.083586   45493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:54:53.083830   45493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:54:53.084116   45493 out.go:368] Setting JSON to false
	I1213 13:54:53.084335   45493 daemonize_unix.go:73] killing process 45482 as it is an old scheduled stop
	I1213 13:54:53.084477   45493 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:54:53.084820   45493 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:54:53.084922   45493 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/config.json ...
	I1213 13:54:53.085131   45493 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:54:53.085234   45493 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 13:54:53.089730   20230 retry.go:31] will retry after 74.815µs: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.090950   20230 retry.go:31] will retry after 146.778µs: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.092200   20230 retry.go:31] will retry after 245.198µs: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.093435   20230 retry.go:31] will retry after 181.497µs: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.094644   20230 retry.go:31] will retry after 412.444µs: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.095988   20230 retry.go:31] will retry after 1.00521ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.097233   20230 retry.go:31] will retry after 1.021429ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.098446   20230 retry.go:31] will retry after 1.145262ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.100710   20230 retry.go:31] will retry after 2.147285ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.102988   20230 retry.go:31] will retry after 4.798857ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.108304   20230 retry.go:31] will retry after 6.409247ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.115645   20230 retry.go:31] will retry after 9.281377ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.125313   20230 retry.go:31] will retry after 16.140488ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.142637   20230 retry.go:31] will retry after 16.290413ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.159948   20230 retry.go:31] will retry after 31.172399ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.191271   20230 retry.go:31] will retry after 22.927299ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
I1213 13:54:53.214654   20230 retry.go:31] will retry after 97.050906ms: open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586694 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586694 -n scheduled-stop-586694
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-586694
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586694 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 13:55:18.956178   45641 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:55:18.956602   45641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:55:18.956622   45641 out.go:374] Setting ErrFile to fd 2...
	I1213 13:55:18.956630   45641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:55:18.957046   45641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-16298/.minikube/bin
	I1213 13:55:18.957455   45641 out.go:368] Setting JSON to false
	I1213 13:55:18.957545   45641 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:55:18.957897   45641 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 13:55:18.957981   45641 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/scheduled-stop-586694/config.json ...
	I1213 13:55:18.958191   45641 mustload.go:66] Loading cluster: scheduled-stop-586694
	I1213 13:55:18.958288   45641 config.go:182] Loaded profile config "scheduled-stop-586694": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
** /stderr **
E1213 13:55:23.075453   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1213 13:55:45.850094   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-586694
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-586694: exit status 7 (70.296118ms)
-- stdout --
	scheduled-stop-586694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586694 -n scheduled-stop-586694
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586694 -n scheduled-stop-586694: exit status 7 (66.16406ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-586694" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-586694
--- PASS: TestScheduledStopUnix (119.70s)
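The scheduled stop works by daemonizing a process that performs the stop later and recording its pid in the profile's pid file; cancelling signals that process, which is why the benign "os: process already finished" error appears once the stop has already run. A hedged sketch of that cancel path (the pid-file location and the SIGTERM choice are assumptions for illustration):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// cancelScheduledStop reads the pid recorded by a scheduled stop and
// signals that process to abort the pending stop.
func cancelScheduledStop(pidFile string) error {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		// While the file is still being written, callers see the
		// "no such file or directory" retries logged above.
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	// If the stop already ran, signaling yields the benign
	// "os: process already finished" error seen in the log.
	return proc.Signal(syscall.SIGTERM)
}

func main() {
	// Placeholder path; the real pid file lives under the profile dir.
	if err := cancelScheduledStop("/tmp/scheduled-stop-demo/pid"); err != nil {
		fmt.Println("cancel:", err)
	}
}
```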
TestSkaffold (143.17s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2295137626 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-458596 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-458596 --memory=3072 --driver=kvm2 : (46.632339482s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2295137626 run --minikube-profile skaffold-458596 --kube-context skaffold-458596 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2295137626 run --minikube-profile skaffold-458596 --kube-context skaffold-458596 --status-check=true --port-forward=false --interactive=false: (1m23.635443349s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-587d69cd94-mpgbp" [d5a790d2-fdea-4792-8234-9fe6746604a9] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004458767s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6c44cdfd55-gqmt5" [f46e402f-9da4-4d27-8b61-8d6a7280ff53] Running
E1213 13:58:26.146458   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003416278s
helpers_test.go:176: Cleaning up "skaffold-458596" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-458596
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-458596: (1.082969057s)
--- PASS: TestSkaffold (143.17s)

TestRunningBinaryUpgrade (474.55s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4097736207 start -p running-upgrade-757318 --memory=3072 --vm-driver=kvm2 
E1213 13:58:32.495653   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4097736207 start -p running-upgrade-757318 --memory=3072 --vm-driver=kvm2 : (2m0.780615039s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-757318 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1213 14:00:45.848604   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-757318 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (5m51.889098029s)
helpers_test.go:176: Cleaning up "running-upgrade-757318" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-757318
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-757318: (1.059785355s)
--- PASS: TestRunningBinaryUpgrade (474.55s)

TestKubernetesUpgrade (195.63s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (50.54961773s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-666684
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-666684: (14.506038892s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-666684 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-666684 status --format={{.Host}}: exit status 7 (66.932019ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
E1213 14:04:37.543838   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (49.029457166s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-666684 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (82.97219ms)
-- stdout --
	* [kubernetes-upgrade-666684] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-666684
	    minikube start -p kubernetes-upgrade-666684 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6666842 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-666684 --kubernetes-version=v1.35.0-beta.0
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-666684 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2 : (1m20.128110084s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-666684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-666684
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-666684: (1.194421293s)
--- PASS: TestKubernetesUpgrade (195.63s)
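The downgrade attempt above fails fast with K8S_DOWNGRADE_UNSUPPORTED, before any VM work starts. A minimal sketch of such a version gate using golang.org/x/mod/semver (fetch with `go get golang.org/x/mod`); minikube's real check lives in its start validation and may be implemented differently, and the function name and error text here are illustrative:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects requests for an older Kubernetes version
// than the cluster currently runs; upgrades and no-ops pass through.
func checkVersionChange(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	// Mirrors the transitions exercised by TestKubernetesUpgrade above.
	fmt.Println(checkVersionChange("v1.28.0", "v1.35.0-beta.0")) // upgrade: <nil>
	fmt.Println(checkVersionChange("v1.35.0-beta.0", "v1.28.0")) // downgrade: error
}
```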
TestPause/serial/Start (129.44s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-629145 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
E1213 14:00:23.075328   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-629145 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (2m9.443959658s)
--- PASS: TestPause/serial/Start (129.44s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (111.55372ms)
-- stdout --
	* [NoKubernetes-772555] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-16298/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-16298/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
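The 0.11s duration shows the conflict is caught during flag validation, before any driver starts. A sketch of such a mutual-exclusion check using the standard flag package; minikube itself wires its flags through cobra, so the helper below is only illustrative:

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two flags are mutually exclusive, as the MK_USAGE error above shows.
	if *noKubernetes && *kubernetesVersion != "" {
		err := errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the exit status observed in the test output above
	}
	fmt.Println("flags ok")
}
```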
TestNoKubernetes/serial/StartWithK8s (59.09s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772555 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772555 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (58.77772823s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-772555 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (59.09s)
TestNoKubernetes/serial/StartWithStopK8s (15.63s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (14.532495822s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-772555 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-772555 status -o json: exit status 2 (235.174196ms)
-- stdout --
	{"Name":"NoKubernetes-772555","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-772555
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.63s)
TestPause/serial/SecondStartNoReconfiguration (57.83s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-629145 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-629145 --alsologtostderr -v=1 --driver=kvm2 : (57.798088553s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.83s)
TestNoKubernetes/serial/Start (24.95s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (24.950948355s)
--- PASS: TestNoKubernetes/serial/Start (24.95s)
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-16298/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-772555 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-772555 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.831854ms)
** stderr **
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
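The check above treats `systemctl is-active --quiet service kubelet` as a boolean: exit status 0 means the unit is active, and any non-zero status (4 in this run) means the kubelet is not running, so the non-zero exit is the expected, passing outcome. A sketch of the same probe run locally rather than over `minikube ssh`; it requires a systemd host and the helper name is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active by interpreting
// the exit status of "systemctl is-active --quiet <unit>".
func serviceActive(unit string) (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", unit)
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // non-zero exit: inactive, failed, or unknown unit
		}
		return false, err // systemctl itself could not be run
	}
	return true, nil
}

func main() {
	active, err := serviceActive("kubelet")
	fmt.Println("kubelet active:", active, "err:", err)
}
```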
TestNoKubernetes/serial/ProfileList (21.6s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (20.859640373s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (21.60s)
TestPause/serial/Pause (0.7s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-629145 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)
TestPause/serial/VerifyStatus (0.24s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-629145 --output=json --layout=cluster
E1213 14:03:15.577174   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.602777   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.609388   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.620957   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.642517   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.684096   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-629145 --output=json --layout=cluster: exit status 2 (242.661512ms)
-- stdout --
	{"Name":"pause-629145","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-629145","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
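The `--layout=cluster` output above encodes component health as HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, which is why a fully paused cluster still produces a non-zero exit from `status`. A hedged sketch decoding that JSON; the struct below is shaped to this one sample, not minikube's canonical schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the VerifyStatus output above.
	raw := `{"Name":"pause-629145","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-629145","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %s (%d)\n", c.Name, c.StatusName, c.StatusCode)
	}
}
```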
TestPause/serial/Unpause (0.65s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-629145 --alsologtostderr -v=5
E1213 14:03:15.766300   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:15.928649   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:16.250384   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/Unpause (0.65s)
TestPause/serial/PauseAgain (0.94s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-629145 --alsologtostderr -v=5
E1213 14:03:16.891876   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/PauseAgain (0.94s)
TestPause/serial/DeletePaused (0.88s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-629145 --alsologtostderr -v=5
E1213 14:03:18.173962   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/DeletePaused (0.88s)
TestPause/serial/VerifyDeletedResources (0.68s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)
TestNoKubernetes/serial/Stop (1.78s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-772555
E1213 14:03:20.735994   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-772555: (1.779235725s)
--- PASS: TestNoKubernetes/serial/Stop (1.78s)
TestNoKubernetes/serial/StartNoArgs (45.08s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-772555 --driver=kvm2 
E1213 14:03:25.857389   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:32.495679   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:03:36.099306   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-772555 --driver=kvm2 : (45.077626102s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.08s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-772555 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-772555 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.765922ms)
** stderr **
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
TestISOImage/Setup (26.35s)
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-719825 --no-kubernetes --driver=kvm2 
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-719825 --no-kubernetes --driver=kvm2 : (26.352713386s)
--- PASS: TestISOImage/Setup (26.35s)
TestISOImage/Binaries/crictl (0.21s)
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)
TestISOImage/Binaries/curl (0.2s)
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)
TestISOImage/Binaries/docker (0.2s)
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.20s)
TestISOImage/Binaries/git (0.2s)
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)
TestISOImage/Binaries/iptables (0.21s)
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)
TestISOImage/Binaries/podman (0.21s)
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.21s)
TestISOImage/Binaries/rsync (0.19s)
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)
TestISOImage/Binaries/socat (0.21s)
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)
TestISOImage/Binaries/wget (0.22s)
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.22s)
TestISOImage/Binaries/VBoxControl (0.21s)
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.21s)
TestISOImage/Binaries/VBoxService (0.2s)
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)
TestStoppedBinaryUpgrade/Setup (0.77s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)
TestStoppedBinaryUpgrade/Upgrade (144.65s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2081996082 start -p stopped-upgrade-995013 --memory=3072 --vm-driver=kvm2 
E1213 14:05:23.075378   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:05:45.848005   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:05:59.466696   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2081996082 start -p stopped-upgrade-995013 --memory=3072 --vm-driver=kvm2 : (1m15.221835208s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2081996082 -p stopped-upgrade-995013 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2081996082 -p stopped-upgrade-995013 stop: (14.616418324s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-995013 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-995013 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (54.808168223s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (144.65s)
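The upgrade test's shape is visible in the three runs above: the previous release starts the cluster, the previous release stops it, and the binary under test must bring it back up. The same sequence as a standalone sketch; the binary paths are the temp-file names from this particular run and would differ elsewhere:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one command, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	profile := "stopped-upgrade-995013"
	// 1) Start with the old release, 2) stop it, 3) start with the new binary.
	run("/tmp/minikube-v1.35.0.2081996082", "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2")
	run("/tmp/minikube-v1.35.0.2081996082", "-p", profile, "stop")
	run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=3072", "--alsologtostderr", "-v=1", "--driver=kvm2")
}
```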
TestNetworkPlugins/group/auto/Start (94.74s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m34.741299964s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.74s)
TestNetworkPlugins/group/kindnet/Start (91.63s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m31.628499134s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.63s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-995013
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-995013: (1.752297213s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)
TestNetworkPlugins/group/calico/Start (114s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m54.00437966s)
--- PASS: TestNetworkPlugins/group/calico/Start (114.00s)
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-378767 "pgrep -a kubelet"
I1213 14:07:57.265262   20230 config.go:182] Loaded profile config "auto-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
TestNetworkPlugins/group/auto/NetCatPod (13.38s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vnwt7" [53267c8c-86e6-4998-89ca-4dbf0daaf4c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vnwt7" [53267c8c-86e6-4998-89ca-4dbf0daaf4c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.005371901s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.38s)
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-scbgm" [ee7b4d0b-45cb-4e7a-aba3-5ff21c7096e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006025637s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)
TestNetworkPlugins/group/auto/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)
TestNetworkPlugins/group/auto/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
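Each CNI group in this section ends with the same three probes from inside the netcat pod: a DNS lookup of kubernetes.default, a TCP dial to localhost:8080, and a hairpin dial back through the pod's own service name. A sketch of those probes; in the tests they run via `kubectl exec`, so the names below only resolve inside the cluster and the helper is illustrative:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a TCP connection with the same 5s patience the
// "nc -w 5" checks above use.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("%s: FAIL (%v)\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: ok\n", addr)
}

func main() {
	// DNS check: only resolvable from inside the cluster.
	if _, err := net.LookupHost("kubernetes.default"); err != nil {
		fmt.Println("dns: FAIL", err)
	}
	probe("localhost:8080") // Localhost check
	probe("netcat:8080")    // HairPin check: the service name resolves back to self
}
```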
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-378767 "pgrep -a kubelet"
I1213 14:08:12.620388   20230 config.go:182] Loaded profile config "kindnet-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zkzg5" [eb735d57-6417-47a5-ab40-7ca56db8095a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 14:08:15.602681   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-zkzg5" [eb735d57-6417-47a5-ab40-7ca56db8095a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.007023762s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)
TestNetworkPlugins/group/custom-flannel/Start (70.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m10.388866776s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.39s)
TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)
TestNetworkPlugins/group/kindnet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)
TestNetworkPlugins/group/kindnet/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)
TestNetworkPlugins/group/false/Start (113.52s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1213 14:08:32.495301   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m53.514852757s)
--- PASS: TestNetworkPlugins/group/false/Start (113.52s)
TestNetworkPlugins/group/enable-default-cni/Start (124.96s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1213 14:08:43.308178   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m4.961382397s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.96s)
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-378767 "pgrep -a kubelet"
I1213 14:09:33.363876   20230 config.go:182] Loaded profile config "custom-flannel-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ss8hl" [2c8faf83-c8e4-41a6-971c-98ae3623d977] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ss8hl" [2c8faf83-c8e4-41a6-971c-98ae3623d977] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006871027s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-v9vsz" [cf36f8f7-af39-45b8-87ac-0c1fc8320b78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004163549s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-378767 "pgrep -a kubelet"
I1213 14:09:50.081593   20230 config.go:182] Loaded profile config "calico-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
TestNetworkPlugins/group/calico/NetCatPod (13.33s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vplz5" [3af955fe-fd05-4c2d-ba5a-ec074d2d2891] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vplz5" [3af955fe-fd05-4c2d-ba5a-ec074d2d2891] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006439367s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/flannel/Start (72.71s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m12.709271426s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.71s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/false/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-378767 "pgrep -a kubelet"
I1213 14:10:21.734875   20230 config.go:182] Loaded profile config "false-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

TestNetworkPlugins/group/false/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-bpqhn" [037fdf1b-0a71-4286-961b-f518281234b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-bpqhn" [037fdf1b-0a71-4286-961b-f518281234b4] Running
E1213 14:10:28.918857   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-427989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006496382s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)

TestNetworkPlugins/group/bridge/Start (71.45s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1213 14:10:23.075823   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m11.446841059s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.45s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-378767 "pgrep -a kubelet"
I1213 14:10:48.138607   20230 config.go:182] Loaded profile config "enable-default-cni-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cspgr" [0e29451a-0972-40e5-95dc-1578cd578f81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cspgr" [0e29451a-0972-40e5-95dc-1578cd578f81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007523131s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.40s)

TestNetworkPlugins/group/kubenet/Start (99.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-378767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m39.892810961s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (99.89s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-4q77v" [74f84ff8-1fb6-473b-846c-42c45a1ed14c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006392131s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (109.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
E1213 14:11:18.677593   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:11:19.959554   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-275041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m49.302032813s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.30s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-378767 "pgrep -a kubelet"
E1213 14:11:22.521567   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1213 14:11:22.584000   20230 config.go:182] Loaded profile config "flannel-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rcr4z" [5d0fc2ef-1771-4460-8885-fbe9fd2da15c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 14:11:27.643398   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rcr4z" [5d0fc2ef-1771-4460-8885-fbe9fd2da15c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.006874273s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-378767 "pgrep -a kubelet"
I1213 14:11:34.196024   20230 config.go:182] Loaded profile config "bridge-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7j69v" [ed7df543-fb40-4ca2-892a-d45cafa7866b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7j69v" [ed7df543-fb40-4ca2-892a-d45cafa7866b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.009244821s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.37s)
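
Note: each NetCatPod step force-replaces the netcat deployment and then polls for a Ready app=netcat pod. A rough stand-alone equivalent (testdata path and context name copied from the log; `kubectl rollout status` stands in for the suite's label-based poll):

    // netcat_deploy.go: sketch of the NetCatPod step outside the suite.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Printf("kubectl %v -> err=%v\n%s\n", args, err, out)
    }

    func main() {
        ctx := "bridge-378767"
        // Recreate the deployment from the test's manifest (run from the
        // test tree, or adjust the path).
        run("--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
        // Wait until the replacement is fully rolled out.
        run("--context", ctx, "rollout", "status", "deployment/netcat", "--timeout=15m")
    }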

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.31s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestStartStop/group/embed-certs/serial/FirstStart (94.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-228187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
E1213 14:11:58.367975   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-228187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (1m34.558885266s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.56s)

TestStartStop/group/no-preload/serial/FirstStart (117.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480987 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480987 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (1m57.891281015s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.89s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-378767 "pgrep -a kubelet"
I1213 14:12:30.580235   20230 config.go:182] Loaded profile config "kubenet-378767": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-378767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xx2mx" [afc71a95-bb8e-44f1-b463-60677d031d34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xx2mx" [afc71a95-bb8e-44f1-b463-60677d031d34] Running
E1213 14:12:39.329318   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004033789s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.34s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-378767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-378767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
E1213 14:13:02.760551   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.371969   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.378487   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.389988   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.411531   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.453510   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.535270   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:06.696946   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:07.019046   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (1m32.160635849s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-275041 create -f testdata/busybox.yaml
E1213 14:13:07.660785   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [886bd0e8-48ab-492d-83a4-6da5a2169fd6] Pending
E1213 14:13:07.883462   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [886bd0e8-48ab-492d-83a4-6da5a2169fd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 14:13:08.943235   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:11.505859   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [886bd0e8-48ab-492d-83a4-6da5a2169fd6] Running
E1213 14:13:15.602844   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/skaffold-458596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:13:16.628202   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.006191693s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-275041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
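
Note: DeployApp ends by reading the open-file limit inside the busybox container. A minimal sketch of that final check (context and pod name taken from the log above):

    // ulimit_check.go: re-run the `ulimit -n` probe that DeployApp ends with.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "old-k8s-version-275041",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Printf("open-file limit in pod: %s", out)
    }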

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 14:13:18.125309   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.315570219s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-275041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/old-k8s-version/serial/Stop (14.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-275041 --alsologtostderr -v=3
E1213 14:13:26.870575   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-275041 --alsologtostderr -v=3: (14.66554936s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.67s)

TestStartStop/group/embed-certs/serial/DeployApp (11.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-228187 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0f154541-4008-451b-9e74-8a99b53b9810] Pending
E1213 14:13:32.495840   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/addons-597924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [0f154541-4008-451b-9e74-8a99b53b9810] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0f154541-4008-451b-9e74-8a99b53b9810] Running
E1213 14:13:38.607322   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004632878s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-228187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275041 -n old-k8s-version-275041
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275041 -n old-k8s-version-275041: exit status 7 (74.345284ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-275041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
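
Note: as the "(may be ok)" lines show, `minikube status` reports a stopped host through a non-zero exit code (7 here) while still printing the state on stdout, so the test tolerates the error. A sketch of that handling, with binary path and profile name copied from the log:

    // status_check.go: tolerate minikube's non-zero status exit codes.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-275041",
            "-n", "old-k8s-version-275041").Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // A stopped host surfaces as an exit code; the state string
            // ("Stopped") still arrives on stdout, so report both.
            fmt.Printf("exit status %d (may be ok): %s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("host state: %s", out)
    }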

TestStartStop/group/old-k8s-version/serial/SecondStart (54.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-275041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (53.864727443s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275041 -n old-k8s-version-275041
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.18s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-228187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-228187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.260479286s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-228187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/embed-certs/serial/Stop (14.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-228187 --alsologtostderr -v=3
E1213 14:13:47.351972   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-228187 --alsologtostderr -v=3: (14.540198192s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-228187 -n embed-certs-228187
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-228187 -n embed-certs-228187: exit status 7 (88.963852ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-228187 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (51.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-228187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2
E1213 14:14:01.250776   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/gvisor-126189/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-228187 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.2: (50.912591512s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-228187 -n embed-certs-228187
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.32s)

TestStartStop/group/no-preload/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480987 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a9e1cd73-1690-4e21-9476-d5ee242d9090] Pending
helpers_test.go:353: "busybox" [a9e1cd73-1690-4e21-9476-d5ee242d9090] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a9e1cd73-1690-4e21-9476-d5ee242d9090] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005825695s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-480987 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.46s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.20349323s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-480987 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/no-preload/serial/Stop (13.65s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-480987 --alsologtostderr -v=3
E1213 14:14:19.569234   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/auto-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-480987 --alsologtostderr -v=3: (13.645861927s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.65s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-7lbvs" [c566a92a-08ab-4582-9578-9e01e97338e5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-7lbvs" [c566a92a-08ab-4582-9578-9e01e97338e5] Running
E1213 14:14:36.225162   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006863609s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480987 -n no-preload-480987
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480987 -n no-preload-480987: exit status 7 (82.209145ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-480987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (50.47s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480987 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 14:14:28.313420   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480987 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (50.068578019s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480987 -n no-preload-480987
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.47s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-064268 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f29a966a-dcf0-4cfa-8440-4f32330becef] Pending
E1213 14:14:33.654970   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:33.661448   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:33.672991   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:33.694345   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:33.736156   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [f29a966a-dcf0-4cfa-8440-4f32330becef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 14:14:33.817556   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:33.979520   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:34.301359   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:34.943124   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [f29a966a-dcf0-4cfa-8440-4f32330becef] Running
E1213 14:14:38.786801   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008369115s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-064268 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-7lbvs" [c566a92a-08ab-4582-9578-9e01e97338e5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005887324s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-275041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-064268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-064268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.309695309s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-064268 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-064268 --alsologtostderr -v=3
E1213 14:14:43.837147   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:43.843785   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:43.855586   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:43.877345   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:43.908939   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:43.919474   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:44.001268   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:44.163542   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:44.485326   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:45.127473   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:46.409003   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-064268 --alsologtostderr -v=3: (12.649691474s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275041 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-275041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275041 -n old-k8s-version-275041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275041 -n old-k8s-version-275041: exit status 2 (297.825528ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275041 -n old-k8s-version-275041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275041 -n old-k8s-version-275041: exit status 2 (286.536867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-275041 --alsologtostderr -v=1
E1213 14:14:48.971436   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-275041 --alsologtostderr -v=1: (1.016535229s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275041 -n old-k8s-version-275041
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-275041 -n old-k8s-version-275041
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.60s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ntpbd" [71356cbb-a1bb-4877-8c78-9a44f401b013] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ntpbd" [71356cbb-a1bb-4877-8c78-9a44f401b013] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00740809s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)
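The UserAppExistsAfterStop and AddonExistsAfterStop steps both poll the cluster for pods matching a label selector (here k8s-app=kubernetes-dashboard) until every match reports Running. A minimal client-go sketch of that kind of wait, assuming a default kubeconfig; waitForLabeledPods and the interval are illustrative, not minikube's actual test helpers:

// waitlabel.go: poll until all pods matching a label selector are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling; transient errors and empty lists are tolerated
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending/terminating
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The tests above use a 9m0s budget for the same wait.
	err = waitForLabeledPods(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("healthy:", err == nil)
}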
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 14:14:54.093506   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:14:54.150453   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (54.191804019s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268: exit status 7 (74.644466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-064268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064268 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.2: (1m53.24263682s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (113.57s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ntpbd" [71356cbb-a1bb-4877-8c78-9a44f401b013] Running
E1213 14:15:04.335585   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006479348s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-228187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-228187 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-228187 --alsologtostderr -v=1
E1213 14:15:06.147858   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-228187 --alsologtostderr -v=1: (1.08393704s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-228187 -n embed-certs-228187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-228187 -n embed-certs-228187: exit status 2 (307.306733ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-228187 -n embed-certs-228187
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-228187 -n embed-certs-228187: exit status 2 (257.501946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-228187 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-228187 -n embed-certs-228187
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-228187 -n embed-certs-228187
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.61s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.22s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.21s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.20s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)
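Each PersistentMounts subtest above asserts that the directory is backed by a real ext4 filesystem inside the guest, since `df -t ext4 <dir>` prints a matching line only when that is true. A sketch of the same probe driven from Go with os/exec, reusing the binary path and profile name from this run; this is an illustration, not iso_test.go itself:

// mounts.go: check that each guest directory sits on an ext4 filesystem.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	dirs := []string{
		"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
	}
	for _, d := range dirs {
		// Same command the subtests run via `minikube ssh`; grep fails (non-zero
		// exit) when df reports no ext4-backed entry for the directory.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "guest-719825",
			"ssh", fmt.Sprintf("df -t ext4 %s | grep %s", d, d))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s: not an ext4 persistent mount: %v\n%s", d, err, out)
		} else {
			fmt.Printf("%s: ok\n", d)
		}
	}
}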
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 89f69959280ebeefd164cfeba1f5b84c6f004bc9
iso_test.go:118:   iso_version: v1.37.0-1765613186-22122
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.22s)
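The four keys printed above map directly onto a small struct, so the ISO's /version.json can be decoded with encoding/json. A sketch using the values from this run; the struct is illustrative (field names taken from the log output, not from minikube's source):

// versionjson.go: decode the ISO's /version.json into a struct.
package main

import (
	"encoding/json"
	"fmt"
)

type isoVersion struct {
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
}

func main() {
	// Payload copied from the TestISOImage/VersionJSON output above.
	raw := []byte(`{"commit":"89f69959280ebeefd164cfeba1f5b84c6f004bc9",
		"iso_version":"v1.37.0-1765613186-22122",
		"kicbase_version":"v0.0.48-1765275396-22083",
		"minikube_version":"v1.37.0"}`)
	var v isoVersion
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", v)
}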
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-719825 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1213 14:15:14.632676   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/custom-flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.22s)
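The eBPF probe above only checks that the guest kernel exposes BTF type information at /sys/kernel/btf/vmlinux (`test -f` over ssh), which is what a kernel built with CONFIG_DEBUG_INFO_BTF provides. The equivalent check on a local Linux host, as a sketch:

// ebpf.go: report whether the running kernel exposes BTF for eBPF tooling.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK") // kernel exposes BTF; eBPF CO-RE tooling can use it
	} else {
		fmt.Println("NOT FOUND")
	}
}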
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qgkp8" [997e0ecf-5bd3-4d51-a7f9-c1df0a809905] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qgkp8" [997e0ecf-5bd3-4d51-a7f9-c1df0a809905] Running
E1213 14:15:22.038023   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.044621   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.056183   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.077797   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.119481   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.201102   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.362795   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:22.685164   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:23.075385   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/functional-690060/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:23.327232   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:24.609728   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:24.817555   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:27.171204   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005775407s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qgkp8" [997e0ecf-5bd3-4d51-a7f9-c1df0a809905] Running
E1213 14:15:32.293695   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006609635s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-480987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480987 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
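The VerifyKubernetesImages steps list the images loaded into the profile as JSON and report anything outside the expected Kubernetes registries, such as the gcr.io/k8s-minikube images flagged above. A sketch of that filter; the JSON shape assumed here (an array of objects with a repoTags field) is an assumption, not verified against this minikube build:

// imagefilter.go: flag images whose tags are not from registry.k8s.io.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type listedImage struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-480987",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []listedImage
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}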
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-994510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-994510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054205507s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-994510 --alsologtostderr -v=3
E1213 14:15:48.514856   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.521295   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.532812   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.554399   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.596237   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.677810   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:48.839990   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:49.161813   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:49.804187   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:15:50.235061   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/kindnet-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-994510 --alsologtostderr -v=3: (13.85604305s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.86s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994510 -n newest-cni-994510
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994510 -n newest-cni-994510: exit status 7 (67.627082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-994510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0
E1213 14:16:03.018073   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/false-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:16:05.779198   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/calico-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-994510 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.35.0-beta.0: (38.063320738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994510 -n newest-cni-994510
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.43s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-994510 image list --format=json
E1213 14:16:39.674773   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/bridge-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-994510 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-994510 --alsologtostderr -v=1: (1.225577656s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994510 -n newest-cni-994510
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994510 -n newest-cni-994510: exit status 2 (275.445315ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994510 -n newest-cni-994510
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994510 -n newest-cni-994510: exit status 2 (275.265202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-994510 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-994510 --alsologtostderr -v=1: (1.09699324s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994510 -n newest-cni-994510
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994510 -n newest-cni-994510
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.79s)
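The Pause flow above is: pause the profile, read each component's state through a Go template (where a non-zero exit is tolerated, hence the repeated "exit status 2 (may be ok)" lines), then unpause and re-check. A minimal os/exec sketch of that sequence; the real assertions live in start_stop_delete_test.go:309, and the expected values are the ones shown in the stdout blocks above:

// pauseflow.go: pause a profile and read component states via status templates.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err) // only exit-status errors are expected ("may be ok")
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-994510"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	// After pause, the apiserver should report Paused and the kubelet Stopped.
	fmt.Println("APIServer:", status(profile, "APIServer")) // want "Paused"
	fmt.Println("Kubelet:", status(profile, "Kubelet"))     // want "Stopped"
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
}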
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ncz4k" [5a1275a5-a650-4af4-b464-fa7fd47b1eb3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1213 14:16:55.038634   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/bridge-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:16:57.324836   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/flannel-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ncz4k" [5a1275a5-a650-4af4-b464-fa7fd47b1eb3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004708103s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ncz4k" [5a1275a5-a650-4af4-b464-fa7fd47b1eb3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004503097s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-064268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-064268 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-064268 --alsologtostderr -v=1
E1213 14:17:10.454958   20230 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-16298/.minikube/profiles/enable-default-cni-378767/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268: exit status 2 (238.806322ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268: exit status 2 (239.872282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-064268 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064268 -n default-k8s-diff-port-064268
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

Test skip (45/452)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
289 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
317 TestKicCustomNetwork 0
318 TestKicExistingNetwork 0
319 TestKicCustomSubnet 0
320 TestKicStaticIP 0
352 TestChangeNoneUser 0
355 TestScheduledStopWindows 0
359 TestInsufficientStorage 0
363 TestMissingContainerUpgrade 0
374 TestNetworkPlugins/group/cilium 5.13
382 TestStartStop/group/disable-driver-mounts 0.24
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-378767 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378767

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378767

>>> host: /etc/nsswitch.conf:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/hosts:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/resolv.conf:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-378767

>>> host: crictl pods:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: crictl containers:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> k8s: describe netcat deployment:
error: context "cilium-378767" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-378767" does not exist

>>> k8s: netcat logs:
error: context "cilium-378767" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-378767" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-378767" does not exist

>>> k8s: coredns logs:
error: context "cilium-378767" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-378767" does not exist

>>> k8s: api server logs:
error: context "cilium-378767" does not exist

>>> host: /etc/cni:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: ip a s:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: ip r s:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: iptables-save:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: iptables table nat:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378767

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378767

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378767" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378767" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378767

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378767

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378767" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378767" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378767" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378767" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-378767" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: kubelet daemon config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> k8s: kubelet logs:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378767

>>> host: docker daemon status:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: docker daemon config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: docker system info:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: cri-docker daemon status:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: cri-docker daemon config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: cri-dockerd version:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: containerd daemon status:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: containerd daemon config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: containerd config dump:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: crio daemon status:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: crio daemon config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: /etc/crio:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

>>> host: crio config:
* Profile "cilium-378767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378767"

----------------------- debugLogs end: cilium-378767 [took: 4.921548582s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-378767" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-378767
--- SKIP: TestNetworkPlugins/group/cilium (5.13s)
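Note: every debugLogs probe above fails the same way because net_test.go:102 skipped the test before minikube start ever ran, so no "cilium-378767" profile or kubeconfig context was created; the empty kubeconfig dump (clusters: null, contexts: null) confirms this. A sketch of the kind of context lookup whose failure produces the "context was not found" lines, using client-go (the contextExists helper is illustrative, not the code the collector runs):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context is present in the
// default kubeconfig chain ($KUBECONFIG or ~/.kube/config).
func contextExists(name string) (bool, error) {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists("cilium-378767")
	fmt.Printf("context present: %v (err: %v)\n", ok, err) // false for a never-started profile
}
```
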
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-344881" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-344881
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)