Test Report: KVM_Linux_crio 22061

1c88f6d23ea396bf85affe6630893acb8f160428:2025-12-10:42722

Failed tests (3/437)

Order  Failed test                                                                  Duration (s)
46     TestAddons/parallel/Ingress                                                  158.39
192    TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd   302.06
345    TestPreload                                                                  149.05
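
Each failure can usually be re-run in isolation from a minikube checkout with the standard Go test runner; the sketch below is illustrative only and assumes a local KVM2/crio environment matching this job (the timeout value is an assumption, and driver/runtime selection goes through the integration-test harness's own flags, which are not shown here and may differ between minikube versions).

# Minimal sketch: re-run one failed integration test locally.
go test -v -timeout 90m ./test/integration -run "TestAddons/parallel/Ingress"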
TestAddons/parallel/Ingress (158.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-462156 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-462156 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-462156 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [38540fef-532f-483f-9d53-b8ff5b9bcf5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [38540fef-532f-483f-9d53-b8ff5b9bcf5b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004641347s
I1210 22:29:30.017333    9065 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-462156 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.475841019s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-462156 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.89
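For context on the failure above: the ssh command's exit status 28 is curl's "operation timed out", i.e. the request to the ingress controller inside the VM never completed within the allotted time. A minimal reproduction sketch, using only names and paths already shown in this log (the --max-time value is an added assumption for convenience):

# Hedged sketch, not part of the test: re-check ingress routing by hand.
kubectl --context addons-462156 -n ingress-nginx get pods -o wide    # is the controller pod actually Running?
kubectl --context addons-462156 get ingress -A                       # rules that should route nginx.example.com
out/minikube-linux-amd64 -p addons-462156 ssh \
  "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"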
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-462156 -n addons-462156
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 logs -n 25: (1.171885373s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-809442                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-809442 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ start   │ --download-only -p binary-mirror-634983 --alsologtostderr --binary-mirror http://127.0.0.1:43689 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-634983 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	│ delete  │ -p binary-mirror-634983                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-634983 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ addons  │ disable dashboard -p addons-462156                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	│ addons  │ enable dashboard -p addons-462156                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	│ start   │ -p addons-462156 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-462156 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-462156 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ enable headlamp -p addons-462156 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
	│ addons  │ addons-462156 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ ip      │ addons-462156 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ ssh     │ addons-462156 ssh cat /opt/local-path-provisioner/pvc-b4447a5f-b7fa-4088-983a-5d4d2b4a48d3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:30 UTC │
	│ addons  │ addons-462156 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-462156                                                                                                                                                                                                                                                                                                                                                                                         │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ ssh     │ addons-462156 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │                     │
	│ addons  │ addons-462156 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
	│ addons  │ addons-462156 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:30 UTC │
	│ ip      │ addons-462156 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-462156        │ jenkins │ v1.37.0 │ 10 Dec 25 22:31 UTC │ 10 Dec 25 22:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:26:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:26:32.169557    9998 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:26:32.169644    9998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:32.169651    9998 out.go:374] Setting ErrFile to fd 2...
	I1210 22:26:32.169655    9998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:32.169828    9998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:26:32.170306    9998 out.go:368] Setting JSON to false
	I1210 22:26:32.171074    9998 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":533,"bootTime":1765405059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:26:32.171122    9998 start.go:143] virtualization: kvm guest
	I1210 22:26:32.173038    9998 out.go:179] * [addons-462156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:26:32.174335    9998 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:26:32.174327    9998 notify.go:221] Checking for updates...
	I1210 22:26:32.176777    9998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:26:32.177993    9998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:26:32.179388    9998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:26:32.180707    9998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:26:32.182073    9998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:26:32.183429    9998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:26:32.212895    9998 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 22:26:32.214276    9998 start.go:309] selected driver: kvm2
	I1210 22:26:32.214290    9998 start.go:927] validating driver "kvm2" against <nil>
	I1210 22:26:32.214308    9998 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:26:32.214945    9998 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:26:32.215149    9998 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:26:32.215184    9998 cni.go:84] Creating CNI manager for ""
	I1210 22:26:32.215223    9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:26:32.215231    9998 start_flags.go:351] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 22:26:32.215271    9998 start.go:353] cluster config:
	{Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:32.215369    9998 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:26:32.216915    9998 out.go:179] * Starting "addons-462156" primary control-plane node in "addons-462156" cluster
	I1210 22:26:32.218022    9998 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:32.218045    9998 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 22:26:32.218056    9998 cache.go:65] Caching tarball of preloaded images
	I1210 22:26:32.218122    9998 preload.go:238] Found /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 22:26:32.218132    9998 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 22:26:32.218421    9998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json ...
	I1210 22:26:32.218449    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json: {Name:mka7649c59aae252a336cdc3b3bcfac74b8f5b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:32.218565    9998 start.go:360] acquireMachinesLock for addons-462156: {Name:mkee27f251311e7c2b20a9d6393fa289a9410b32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 22:26:32.218604    9998 start.go:364] duration metric: took 28.357µs to acquireMachinesLock for "addons-462156"
	I1210 22:26:32.218621    9998 start.go:93] Provisioning new machine with config: &{Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:26:32.218669    9998 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 22:26:32.220079    9998 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1210 22:26:32.220209    9998 start.go:159] libmachine.API.Create for "addons-462156" (driver="kvm2")
	I1210 22:26:32.220233    9998 client.go:173] LocalClient.Create starting
	I1210 22:26:32.220298    9998 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem
	I1210 22:26:32.250694    9998 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem
	I1210 22:26:32.278720    9998 main.go:143] libmachine: creating domain...
	I1210 22:26:32.278739    9998 main.go:143] libmachine: creating network...
	I1210 22:26:32.280083    9998 main.go:143] libmachine: found existing default network
	I1210 22:26:32.280392    9998 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 22:26:32.280981    9998 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b4d360}
	I1210 22:26:32.281074    9998 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-462156</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 22:26:32.287137    9998 main.go:143] libmachine: creating private network mk-addons-462156 192.168.39.0/24...
	I1210 22:26:32.350851    9998 main.go:143] libmachine: private network mk-addons-462156 192.168.39.0/24 created
	I1210 22:26:32.351114    9998 main.go:143] libmachine: <network>
	  <name>mk-addons-462156</name>
	  <uuid>4e33da69-9275-4eca-b612-86f4ce6cac3e</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:56:9a:40'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 22:26:32.351141    9998 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 ...
	I1210 22:26:32.351165    9998 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22061-5125/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 22:26:32.351180    9998 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:26:32.351257    9998 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22061-5125/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22061-5125/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 22:26:32.620660    9998 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa...
	I1210 22:26:32.660147    9998 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk...
	I1210 22:26:32.660184    9998 main.go:143] libmachine: Writing magic tar header
	I1210 22:26:32.660208    9998 main.go:143] libmachine: Writing SSH key tar header
	I1210 22:26:32.660276    9998 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 ...
	I1210 22:26:32.660335    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156
	I1210 22:26:32.660379    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 (perms=drwx------)
	I1210 22:26:32.660397    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube/machines
	I1210 22:26:32.660406    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube/machines (perms=drwxr-xr-x)
	I1210 22:26:32.660417    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:26:32.660426    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube (perms=drwxr-xr-x)
	I1210 22:26:32.660434    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125
	I1210 22:26:32.660473    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125 (perms=drwxrwxr-x)
	I1210 22:26:32.660483    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 22:26:32.660493    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 22:26:32.660500    9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 22:26:32.660507    9998 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 22:26:32.660516    9998 main.go:143] libmachine: checking permissions on dir: /home
	I1210 22:26:32.660525    9998 main.go:143] libmachine: skipping /home - not owner
	I1210 22:26:32.660530    9998 main.go:143] libmachine: defining domain...
	I1210 22:26:32.661866    9998 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-462156</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-462156'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 22:26:32.669613    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:9c:8e:25 in network default
	I1210 22:26:32.670162    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:32.670181    9998 main.go:143] libmachine: starting domain...
	I1210 22:26:32.670186    9998 main.go:143] libmachine: ensuring networks are active...
	I1210 22:26:32.670937    9998 main.go:143] libmachine: Ensuring network default is active
	I1210 22:26:32.671307    9998 main.go:143] libmachine: Ensuring network mk-addons-462156 is active
	I1210 22:26:32.672041    9998 main.go:143] libmachine: getting domain XML...
	I1210 22:26:32.673101    9998 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-462156</name>
	  <uuid>04673162-af0d-46ce-874c-a95dda098d35</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8c:7a:8f'/>
	      <source network='mk-addons-462156'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9c:8e:25'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1210 22:26:33.972515    9998 main.go:143] libmachine: waiting for domain to start...
	I1210 22:26:33.973933    9998 main.go:143] libmachine: domain is now running
	I1210 22:26:33.973956    9998 main.go:143] libmachine: waiting for IP...
	I1210 22:26:33.974688    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:33.975154    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:33.975171    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:33.975447    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:33.975490    9998 retry.go:31] will retry after 210.316166ms: waiting for domain to come up
	I1210 22:26:34.187199    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:34.187840    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:34.187862    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:34.188157    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:34.188204    9998 retry.go:31] will retry after 289.237581ms: waiting for domain to come up
	I1210 22:26:34.478636    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:34.479125    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:34.479141    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:34.479469    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:34.479508    9998 retry.go:31] will retry after 470.255734ms: waiting for domain to come up
	I1210 22:26:34.950941    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:34.951449    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:34.951462    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:34.951729    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:34.951755    9998 retry.go:31] will retry after 467.929401ms: waiting for domain to come up
	I1210 22:26:35.421550    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:35.422196    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:35.422217    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:35.422566    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:35.422607    9998 retry.go:31] will retry after 534.97958ms: waiting for domain to come up
	I1210 22:26:35.959333    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:35.959812    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:35.959826    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:35.960059    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:35.960084    9998 retry.go:31] will retry after 624.235412ms: waiting for domain to come up
	I1210 22:26:36.585972    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:36.586381    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:36.586408    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:36.586719    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:36.586752    9998 retry.go:31] will retry after 1.055332171s: waiting for domain to come up
	I1210 22:26:37.643581    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:37.644206    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:37.644224    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:37.644496    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:37.644532    9998 retry.go:31] will retry after 1.103273366s: waiting for domain to come up
	I1210 22:26:38.749677    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:38.750109    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:38.750124    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:38.750368    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:38.750395    9998 retry.go:31] will retry after 1.832613895s: waiting for domain to come up
	I1210 22:26:40.585524    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:40.586170    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:40.586189    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:40.586510    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:40.586547    9998 retry.go:31] will retry after 1.876007042s: waiting for domain to come up
	I1210 22:26:42.464650    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:42.465175    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:42.465189    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:42.465447    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:42.465478    9998 retry.go:31] will retry after 2.588292567s: waiting for domain to come up
	I1210 22:26:45.057261    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:45.057821    9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
	I1210 22:26:45.057838    9998 main.go:143] libmachine: trying to list again with source=arp
	I1210 22:26:45.058140    9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
	I1210 22:26:45.058179    9998 retry.go:31] will retry after 2.592577244s: waiting for domain to come up
	I1210 22:26:47.652009    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.652698    9998 main.go:143] libmachine: domain addons-462156 has current primary IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.652716    9998 main.go:143] libmachine: found domain IP: 192.168.39.89
	I1210 22:26:47.652726    9998 main.go:143] libmachine: reserving static IP address...
	I1210 22:26:47.653236    9998 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-462156", mac: "52:54:00:8c:7a:8f", ip: "192.168.39.89"} in network mk-addons-462156
	I1210 22:26:47.827736    9998 main.go:143] libmachine: reserved static IP address 192.168.39.89 for domain addons-462156
	I1210 22:26:47.827761    9998 main.go:143] libmachine: waiting for SSH...
	I1210 22:26:47.827769    9998 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 22:26:47.830361    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.830899    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:47.830924    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.831132    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:47.831392    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:47.831404    9998 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 22:26:47.947064    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 22:26:47.947429    9998 main.go:143] libmachine: domain creation complete
	I1210 22:26:47.949019    9998 machine.go:94] provisionDockerMachine start ...
	I1210 22:26:47.951191    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.951607    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:47.951636    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:47.951791    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:47.952008    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:47.952021    9998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 22:26:48.063022    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 22:26:48.063053    9998 buildroot.go:166] provisioning hostname "addons-462156"
	I1210 22:26:48.065786    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.066101    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.066149    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.066364    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:48.066580    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:48.066592    9998 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-462156 && echo "addons-462156" | sudo tee /etc/hostname
	I1210 22:26:48.195095    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-462156
	
	I1210 22:26:48.197631    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.198177    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.198203    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.198352    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:48.198553    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:48.198586    9998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-462156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-462156/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-462156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 22:26:48.334882    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 22:26:48.334908    9998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5125/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5125/.minikube}
	I1210 22:26:48.334924    9998 buildroot.go:174] setting up certificates
	I1210 22:26:48.334936    9998 provision.go:84] configureAuth start
	I1210 22:26:48.337577    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.337943    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.337972    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.340138    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.340472    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.340494    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.340628    9998 provision.go:143] copyHostCerts
	I1210 22:26:48.340704    9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/ca.pem (1078 bytes)
	I1210 22:26:48.340848    9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/cert.pem (1123 bytes)
	I1210 22:26:48.341001    9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/key.pem (1675 bytes)
	I1210 22:26:48.341099    9998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem org=jenkins.addons-462156 san=[127.0.0.1 192.168.39.89 addons-462156 localhost minikube]
	I1210 22:26:48.404755    9998 provision.go:177] copyRemoteCerts
	I1210 22:26:48.404810    9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 22:26:48.407209    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.407618    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.407640    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.407835    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:26:48.495876    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 22:26:48.524015    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 22:26:48.552663    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 22:26:48.580287    9998 provision.go:87] duration metric: took 245.338206ms to configureAuth
	I1210 22:26:48.580316    9998 buildroot.go:189] setting minikube options for container-runtime
	I1210 22:26:48.580524    9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:26:48.583299    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.583702    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.583733    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.583902    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:48.584124    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:48.584144    9998 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 22:26:48.839178    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 22:26:48.839232    9998 machine.go:97] duration metric: took 890.169875ms to provisionDockerMachine
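For reference, the CRIO_MINIKUBE_OPTIONS block written above ends up in an environment file that the crio service is expected to pick up on the restart that follows. A minimal, hedged sketch for reading it back by hand (the minikube ssh invocations are illustrative, not part of this run):

	# hedged sketch: confirm the drop-in contents and that crio came back up
	minikube -p addons-462156 ssh "cat /etc/sysconfig/crio.minikube"
	minikube -p addons-462156 ssh "sudo systemctl is-active crio"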
	I1210 22:26:48.839259    9998 client.go:176] duration metric: took 16.619015839s to LocalClient.Create
	I1210 22:26:48.839284    9998 start.go:167] duration metric: took 16.619073728s to libmachine.API.Create "addons-462156"
	I1210 22:26:48.839298    9998 start.go:293] postStartSetup for "addons-462156" (driver="kvm2")
	I1210 22:26:48.839310    9998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 22:26:48.839379    9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 22:26:48.842291    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.842861    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.842890    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.843052    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:26:48.930251    9998 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 22:26:48.935271    9998 info.go:137] Remote host: Buildroot 2025.02
	I1210 22:26:48.935303    9998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/addons for local assets ...
	I1210 22:26:48.935380    9998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/files for local assets ...
	I1210 22:26:48.935407    9998 start.go:296] duration metric: took 96.102593ms for postStartSetup
	I1210 22:26:48.938720    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.939164    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.939199    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.939477    9998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json ...
	I1210 22:26:48.939681    9998 start.go:128] duration metric: took 16.721000925s to createHost
	I1210 22:26:48.942167    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.942566    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:48.942588    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:48.942720    9998 main.go:143] libmachine: Using SSH client type: native
	I1210 22:26:48.942905    9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1210 22:26:48.942914    9998 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 22:26:49.054480    9998 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765405609.010460701
	
	I1210 22:26:49.054510    9998 fix.go:216] guest clock: 1765405609.010460701
	I1210 22:26:49.054536    9998 fix.go:229] Guest: 2025-12-10 22:26:49.010460701 +0000 UTC Remote: 2025-12-10 22:26:48.939693781 +0000 UTC m=+16.815037594 (delta=70.76692ms)
	I1210 22:26:49.054554    9998 fix.go:200] guest clock delta is within tolerance: 70.76692ms
	I1210 22:26:49.054558    9998 start.go:83] releasing machines lock for "addons-462156", held for 16.835944852s
	I1210 22:26:49.057406    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.057816    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:49.057842    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.058352    9998 ssh_runner.go:195] Run: cat /version.json
	I1210 22:26:49.058473    9998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 22:26:49.061562    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.061946    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:49.061968    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.062014    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.062124    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:26:49.062569    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:49.062607    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:49.062777    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:26:49.143063    9998 ssh_runner.go:195] Run: systemctl --version
	I1210 22:26:49.179976    9998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 22:26:49.339734    9998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 22:26:49.347605    9998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 22:26:49.347664    9998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 22:26:49.368076    9998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 22:26:49.368102    9998 start.go:496] detecting cgroup driver to use...
	I1210 22:26:49.368159    9998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 22:26:49.392091    9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 22:26:49.412400    9998 docker.go:218] disabling cri-docker service (if available) ...
	I1210 22:26:49.412475    9998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 22:26:49.430362    9998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 22:26:49.446615    9998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 22:26:49.589065    9998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 22:26:49.804627    9998 docker.go:234] disabling docker service ...
	I1210 22:26:49.804687    9998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 22:26:49.821191    9998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 22:26:49.836216    9998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 22:26:49.991961    9998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 22:26:50.134399    9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 22:26:50.150200    9998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 22:26:50.175284    9998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 22:26:50.175368    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.188693    9998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 22:26:50.188756    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.201474    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.214476    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.227100    9998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 22:26:50.240186    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.252323    9998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.274866    9998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 22:26:50.287289    9998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 22:26:50.299059    9998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 22:26:50.299116    9998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 22:26:50.320730    9998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
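The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then load br_netfilter and enable IPv4 forwarding. A hedged sketch for spot-checking both on the node (commands assumed, not taken from the log):

	# hedged sketch: keys touched by the sed edits above
	minikube -p addons-462156 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# hedged sketch: kernel prerequisites after modprobe br_netfilter
	minikube -p addons-462156 ssh "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"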
	I1210 22:26:50.333897    9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:50.472201    9998 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 22:26:50.582759    9998 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 22:26:50.582873    9998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 22:26:50.588012    9998 start.go:564] Will wait 60s for crictl version
	I1210 22:26:50.588081    9998 ssh_runner.go:195] Run: which crictl
	I1210 22:26:50.592091    9998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 22:26:50.627114    9998 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 22:26:50.627262    9998 ssh_runner.go:195] Run: crio --version
	I1210 22:26:50.655008    9998 ssh_runner.go:195] Run: crio --version
	I1210 22:26:50.686270    9998 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 22:26:50.689706    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:50.690065    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:26:50.690089    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:26:50.690254    9998 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 22:26:50.694646    9998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:50.708902    9998 kubeadm.go:884] updating cluster {Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 22:26:50.709011    9998 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:50.709058    9998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:50.736281    9998 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1210 22:26:50.736344    9998 ssh_runner.go:195] Run: which lz4
	I1210 22:26:50.740585    9998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 22:26:50.745107    9998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 22:26:50.745135    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1210 22:26:51.855223    9998 crio.go:462] duration metric: took 1.114670573s to copy over tarball
	I1210 22:26:51.855292    9998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 22:26:53.372246    9998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.516920498s)
	I1210 22:26:53.372269    9998 crio.go:469] duration metric: took 1.517018571s to extract the tarball
	I1210 22:26:53.372279    9998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 22:26:53.407828    9998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 22:26:53.449051    9998 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 22:26:53.449072    9998 cache_images.go:86] Images are preloaded, skipping loading
	I1210 22:26:53.449078    9998 kubeadm.go:935] updating node { 192.168.39.89  8443 v1.34.2 crio true true} ...
	I1210 22:26:53.449154    9998 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-462156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 22:26:53.449260    9998 ssh_runner.go:195] Run: crio config
	I1210 22:26:53.495717    9998 cni.go:84] Creating CNI manager for ""
	I1210 22:26:53.495778    9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:26:53.495815    9998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 22:26:53.495873    9998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-462156 NodeName:addons-462156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 22:26:53.496175    9998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-462156"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 22:26:53.496276    9998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 22:26:53.509465    9998 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 22:26:53.509535    9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 22:26:53.520969    9998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1210 22:26:53.541077    9998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 22:26:53.560681    9998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
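The kubeadm config rendered above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. If a config of this shape needs to be sanity-checked by hand, recent kubeadm releases ship a validator; a hedged sketch, not something this test runs:

	# hedged sketch: validate the rendered config against the v1beta4 schema
	minikube -p addons-462156 ssh "sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"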
	I1210 22:26:53.581579    9998 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I1210 22:26:53.585612    9998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 22:26:53.599863    9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:26:53.747927    9998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 22:26:53.781143    9998 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156 for IP: 192.168.39.89
	I1210 22:26:53.781169    9998 certs.go:195] generating shared ca certs ...
	I1210 22:26:53.781185    9998 certs.go:227] acquiring lock for ca certs: {Name:mkea05d5a03ad9931f0e4f58a8f8d8a307addad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:53.781314    9998 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key
	I1210 22:26:53.854417    9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt ...
	I1210 22:26:53.854451    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt: {Name:mka2b739e386ec9988f2978e08f700a007b1aaa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:53.854620    9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key ...
	I1210 22:26:53.854631    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key: {Name:mk96567aa363f44c5e4bb3d596fdd02a58c35fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:53.854717    9998 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key
	I1210 22:26:53.891801    9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt ...
	I1210 22:26:53.891822    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt: {Name:mk264fdf7005b89cf2b12bffa5bd551cd8f9b8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:53.891969    9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key ...
	I1210 22:26:53.891981    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key: {Name:mk55cd9d85b85d4fa27aa5825b03156606fb26fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:53.892047    9998 certs.go:257] generating profile certs ...
	I1210 22:26:53.892099    9998 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key
	I1210 22:26:53.892121    9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt with IP's: []
	I1210 22:26:54.077166    9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt ...
	I1210 22:26:54.077193    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: {Name:mkccad67cc705bb7c6228d7393e2d18a87f92ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.077360    9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key ...
	I1210 22:26:54.077372    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key: {Name:mk221bcde58651631aa74395b3ed7c76a192e171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.077951    9998 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb
	I1210 22:26:54.077974    9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I1210 22:26:54.207953    9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb ...
	I1210 22:26:54.207979    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb: {Name:mkf9e4d36e9bfce0ff658089c69123e4aee1e819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.208126    9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb ...
	I1210 22:26:54.208140    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb: {Name:mk0c5a74b707644ae88eeb264b79701a440d00cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.208207    9998 certs.go:382] copying /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb -> /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt
	I1210 22:26:54.208274    9998 certs.go:386] copying /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb -> /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key
	I1210 22:26:54.208316    9998 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key
	I1210 22:26:54.208334    9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt with IP's: []
	I1210 22:26:54.448685    9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt ...
	I1210 22:26:54.448713    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt: {Name:mk1c5213d9313745a64deed022c18a32542a6972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.448884    9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key ...
	I1210 22:26:54.448897    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key: {Name:mk2e78c1fe91d458f8a06a11151c39f823e990b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:54.449056    9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 22:26:54.449092    9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem (1078 bytes)
	I1210 22:26:54.449118    9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem (1123 bytes)
	I1210 22:26:54.449140    9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem (1675 bytes)
	I1210 22:26:54.449695    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 22:26:54.488985    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 22:26:54.529665    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 22:26:54.558981    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 22:26:54.587754    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 22:26:54.616075    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 22:26:54.644579    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 22:26:54.673227    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 22:26:54.702403    9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 22:26:54.732947    9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 22:26:54.752250    9998 ssh_runner.go:195] Run: openssl version
	I1210 22:26:54.758248    9998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:54.769416    9998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 22:26:54.780660    9998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:54.785641    9998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:54.785691    9998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 22:26:54.792734    9998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 22:26:54.803510    9998 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
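The symlink name b5213941.0 created above is not arbitrary: it is the OpenSSL subject hash of the minikube CA (the value printed by the `openssl x509 -hash` run a few lines earlier) with a .0 suffix. A hedged sketch showing the relationship, reusing the paths from this log:

	# hedged sketch: the subject hash should match the /etc/ssl/certs symlink name
	minikube -p addons-462156 ssh "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem; ls -l /etc/ssl/certs/b5213941.0"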
	I1210 22:26:54.814404    9998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 22:26:54.818937    9998 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 22:26:54.818991    9998 kubeadm.go:401] StartCluster: {Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:54.819059    9998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 22:26:54.819132    9998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 22:26:54.851100    9998 cri.go:89] found id: ""
	I1210 22:26:54.851172    9998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 22:26:54.863054    9998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 22:26:54.874575    9998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 22:26:54.885891    9998 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 22:26:54.885909    9998 kubeadm.go:158] found existing configuration files:
	
	I1210 22:26:54.885961    9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 22:26:54.896727    9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 22:26:54.896792    9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 22:26:54.908147    9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 22:26:54.918671    9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 22:26:54.918739    9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 22:26:54.930112    9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 22:26:54.940879    9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 22:26:54.940934    9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 22:26:54.952251    9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 22:26:54.963133    9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 22:26:54.963194    9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 22:26:54.974336    9998 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 22:26:55.022873    9998 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 22:26:55.022930    9998 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 22:26:55.115925    9998 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 22:26:55.116102    9998 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 22:26:55.116240    9998 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 22:26:55.127231    9998 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 22:26:55.129521    9998 out.go:252]   - Generating certificates and keys ...
	I1210 22:26:55.129609    9998 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 22:26:55.129701    9998 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 22:26:55.390215    9998 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 22:26:56.031678    9998 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 22:26:56.137282    9998 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 22:26:56.517946    9998 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 22:26:57.004227    9998 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 22:26:57.004464    9998 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-462156 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I1210 22:26:57.182564    9998 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 22:26:57.182743    9998 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-462156 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I1210 22:26:57.500819    9998 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 22:26:57.716287    9998 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 22:26:57.793897    9998 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 22:26:57.793965    9998 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 22:26:57.841213    9998 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 22:26:57.979702    9998 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 22:26:58.059939    9998 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 22:26:58.197526    9998 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 22:26:58.379052    9998 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 22:26:58.379333    9998 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 22:26:58.381801    9998 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 22:26:58.387611    9998 out.go:252]   - Booting up control plane ...
	I1210 22:26:58.387732    9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 22:26:58.387833    9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 22:26:58.387916    9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 22:26:58.404591    9998 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 22:26:58.404706    9998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 22:26:58.411574    9998 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 22:26:58.411715    9998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 22:26:58.411776    9998 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 22:26:58.558107    9998 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 22:26:58.558240    9998 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 22:27:00.057561    9998 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501546324s
	I1210 22:27:00.060002    9998 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 22:27:00.060122    9998 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.89:8443/livez
	I1210 22:27:00.060215    9998 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 22:27:00.060285    9998 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 22:27:02.064112    9998 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.006055079s
	I1210 22:27:03.323612    9998 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.267113624s
	I1210 22:27:05.561073    9998 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.504855888s
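The kubelet-check and control-plane-check lines above poll plain HTTP(S) health endpoints inside the VM (kubelet on 10248, apiserver livez on 8443, controller-manager on 10257, scheduler on 10259). A hedged sketch for probing the two externally reachable ones by hand; the curl invocations are assumed, not taken from the log:

	# hedged sketch: kubelet healthz and apiserver livez, as polled by kubeadm above
	minikube -p addons-462156 ssh "curl -s http://127.0.0.1:10248/healthz; echo; curl -sk https://192.168.39.89:8443/livez; echo"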
	I1210 22:27:05.581240    9998 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 22:27:05.598086    9998 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 22:27:05.613522    9998 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 22:27:05.613771    9998 kubeadm.go:319] [mark-control-plane] Marking the node addons-462156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 22:27:05.627696    9998 kubeadm.go:319] [bootstrap-token] Using token: 0h1f2a.ay6wbb2g4r1dwsjt
	I1210 22:27:05.629148    9998 out.go:252]   - Configuring RBAC rules ...
	I1210 22:27:05.629264    9998 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 22:27:05.635704    9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 22:27:05.648556    9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 22:27:05.658081    9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 22:27:05.662264    9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 22:27:05.666680    9998 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 22:27:05.966472    9998 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 22:27:06.411935    9998 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 22:27:06.965878    9998 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 22:27:06.967563    9998 kubeadm.go:319] 
	I1210 22:27:06.967634    9998 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 22:27:06.967660    9998 kubeadm.go:319] 
	I1210 22:27:06.967738    9998 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 22:27:06.967746    9998 kubeadm.go:319] 
	I1210 22:27:06.967768    9998 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 22:27:06.967823    9998 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 22:27:06.967902    9998 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 22:27:06.967915    9998 kubeadm.go:319] 
	I1210 22:27:06.967978    9998 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 22:27:06.967986    9998 kubeadm.go:319] 
	I1210 22:27:06.968034    9998 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 22:27:06.968043    9998 kubeadm.go:319] 
	I1210 22:27:06.968139    9998 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 22:27:06.968250    9998 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 22:27:06.968345    9998 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 22:27:06.968355    9998 kubeadm.go:319] 
	I1210 22:27:06.968494    9998 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 22:27:06.968605    9998 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 22:27:06.968615    9998 kubeadm.go:319] 
	I1210 22:27:06.968749    9998 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0h1f2a.ay6wbb2g4r1dwsjt \
	I1210 22:27:06.968900    9998 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fd318d48817654ae7d58380c81fceba02f616127cf15d0ed84bb8d49ffe71ffb \
	I1210 22:27:06.968942    9998 kubeadm.go:319] 	--control-plane 
	I1210 22:27:06.968952    9998 kubeadm.go:319] 
	I1210 22:27:06.969094    9998 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 22:27:06.969112    9998 kubeadm.go:319] 
	I1210 22:27:06.969225    9998 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0h1f2a.ay6wbb2g4r1dwsjt \
	I1210 22:27:06.969376    9998 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fd318d48817654ae7d58380c81fceba02f616127cf15d0ed84bb8d49ffe71ffb 
	I1210 22:27:06.969661    9998 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
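The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. A hedged sketch that recomputes it from the certificate directory this run uses (/var/lib/minikube/certs); the pipeline is the standard kubeadm-documented recipe, not something the test executes:

	# hedged sketch: recompute the discovery token CA cert hash on the node
	minikube -p addons-462156 ssh "sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'"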
	I1210 22:27:06.969685    9998 cni.go:84] Creating CNI manager for ""
	I1210 22:27:06.969697    9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:27:06.971833    9998 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 22:27:06.972967    9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 22:27:06.985553    9998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 22:27:07.009699    9998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 22:27:07.009765    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:07.009868    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-462156 minikube.k8s.io/updated_at=2025_12_10T22_27_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-462156 minikube.k8s.io/primary=true
	I1210 22:27:07.155580    9998 ops.go:34] apiserver oom_adj: -16
	I1210 22:27:07.155642    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:07.656067    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:08.155995    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:08.655769    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:09.155882    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:09.655794    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:10.156090    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:10.656720    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:11.155700    9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 22:27:11.239783    9998 kubeadm.go:1114] duration metric: took 4.230082645s to wait for elevateKubeSystemPrivileges
	I1210 22:27:11.239817    9998 kubeadm.go:403] duration metric: took 16.420832459s to StartCluster
	I1210 22:27:11.239834    9998 settings.go:142] acquiring lock: {Name:mkb6311113a1595706e930e5ec066489475d2931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:27:11.239972    9998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:27:11.240347    9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/kubeconfig: {Name:mkc997741ee5522db4814beb6df9db1a27fdfa83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:27:11.240609    9998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 22:27:11.240640    9998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 22:27:11.240689    9998 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 22:27:11.240809    9998 addons.go:70] Setting yakd=true in profile "addons-462156"
	I1210 22:27:11.240831    9998 addons.go:239] Setting addon yakd=true in "addons-462156"
	I1210 22:27:11.240854    9998 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-462156"
	I1210 22:27:11.240871    9998 addons.go:70] Setting default-storageclass=true in profile "addons-462156"
	I1210 22:27:11.240880    9998 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-462156"
	I1210 22:27:11.240890    9998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-462156"
	I1210 22:27:11.240894    9998 addons.go:70] Setting ingress-dns=true in profile "addons-462156"
	I1210 22:27:11.240898    9998 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-462156"
	I1210 22:27:11.240906    9998 addons.go:239] Setting addon ingress-dns=true in "addons-462156"
	I1210 22:27:11.240921    9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:27:11.240941    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.240950    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.240971    9998 addons.go:70] Setting storage-provisioner=true in profile "addons-462156"
	I1210 22:27:11.240986    9998 addons.go:239] Setting addon storage-provisioner=true in "addons-462156"
	I1210 22:27:11.240995    9998 addons.go:70] Setting gcp-auth=true in profile "addons-462156"
	I1210 22:27:11.241015    9998 addons.go:70] Setting registry=true in profile "addons-462156"
	I1210 22:27:11.241025    9998 addons.go:239] Setting addon registry=true in "addons-462156"
	I1210 22:27:11.241027    9998 mustload.go:66] Loading cluster: addons-462156
	I1210 22:27:11.241041    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.241066    9998 addons.go:70] Setting inspektor-gadget=true in profile "addons-462156"
	I1210 22:27:11.241092    9998 addons.go:239] Setting addon inspektor-gadget=true in "addons-462156"
	I1210 22:27:11.241120    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.241196    9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:27:11.241405    9998 addons.go:70] Setting registry-creds=true in profile "addons-462156"
	I1210 22:27:11.241420    9998 addons.go:239] Setting addon registry-creds=true in "addons-462156"
	I1210 22:27:11.241457    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.241968    9998 addons.go:70] Setting metrics-server=true in profile "addons-462156"
	I1210 22:27:11.242042    9998 addons.go:239] Setting addon metrics-server=true in "addons-462156"
	I1210 22:27:11.242104    9998 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-462156"
	I1210 22:27:11.242144    9998 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-462156"
	I1210 22:27:11.242223    9998 addons.go:70] Setting volcano=true in profile "addons-462156"
	I1210 22:27:11.242294    9998 addons.go:239] Setting addon volcano=true in "addons-462156"
	I1210 22:27:11.242339    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.242423    9998 addons.go:70] Setting ingress=true in profile "addons-462156"
	I1210 22:27:11.240861    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.242463    9998 addons.go:239] Setting addon ingress=true in "addons-462156"
	I1210 22:27:11.242499    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.242617    9998 addons.go:70] Setting volumesnapshots=true in profile "addons-462156"
	I1210 22:27:11.242636    9998 addons.go:239] Setting addon volumesnapshots=true in "addons-462156"
	I1210 22:27:11.242658    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.242907    9998 out.go:179] * Verifying Kubernetes components...
	I1210 22:27:11.241006    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.240811    9998 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-462156"
	I1210 22:27:11.243181    9998 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-462156"
	I1210 22:27:11.243215    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.242109    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.240883    9998 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-462156"
	I1210 22:27:11.243388    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.240853    9998 addons.go:70] Setting cloud-spanner=true in profile "addons-462156"
	I1210 22:27:11.243765    9998 addons.go:239] Setting addon cloud-spanner=true in "addons-462156"
	I1210 22:27:11.243786    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.244489    9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 22:27:11.247678    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.247732    9998 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 22:27:11.248674    9998 addons.go:239] Setting addon default-storageclass=true in "addons-462156"
	I1210 22:27:11.248708    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.250317    9998 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 22:27:11.250332    9998 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 22:27:11.250356    9998 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 22:27:11.250380    9998 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 22:27:11.250336    9998 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-462156"
	I1210 22:27:11.250907    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:11.250390    9998 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	W1210 22:27:11.251376    9998 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 22:27:11.252133    9998 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:27:11.252151    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 22:27:11.252978    9998 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:27:11.252982    9998 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:27:11.253371    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 22:27:11.252986    9998 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 22:27:11.253459    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 22:27:11.253012    9998 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:27:11.253577    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 22:27:11.253022    9998 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:27:11.253645    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 22:27:11.252937    9998 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 22:27:11.253836    9998 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 22:27:11.253838    9998 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 22:27:11.253838    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 22:27:11.253846    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 22:27:11.253870    9998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 22:27:11.254705    9998 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 22:27:11.255089    9998 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 22:27:11.254193    9998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 22:27:11.255182    9998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 22:27:11.255510    9998 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 22:27:11.255536    9998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 22:27:11.255555    9998 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:27:11.255567    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 22:27:11.255519    9998 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 22:27:11.256267    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 22:27:11.256286    9998 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 22:27:11.256307    9998 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 22:27:11.256361    9998 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:27:11.256373    9998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:27:11.256731    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 22:27:11.257028    9998 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 22:27:11.257045    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 22:27:11.257762    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 22:27:11.258958    9998 out.go:179]   - Using image docker.io/busybox:stable
	I1210 22:27:11.259053    9998 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 22:27:11.260807    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.261050    9998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:27:11.261065    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 22:27:11.261068    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 22:27:11.262118    9998 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:27:11.262201    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 22:27:11.262898    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.262942    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.263683    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.263860    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.264153    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.264182    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.264336    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 22:27:11.264877    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.265499    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.265560    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.265662    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.265726    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.265759    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.265886    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.265951    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.265999    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.266220    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.266300    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.266650    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.266787    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.266888    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.267000    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 22:27:11.267239    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.267990    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268091    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268174    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.268203    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268327    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.268353    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268458    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268563    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.268852    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.268962    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.269182    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.269211    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.269278    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.269315    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.269532    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.269565    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.269592    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.269746    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.269841    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.269861    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.269883    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.269948    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.269974    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 22:27:11.270343    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.270714    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.270745    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.270961    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.271307    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.271763    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.271777    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.271800    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.272019    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.272306    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.272333    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.272394    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 22:27:11.272514    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:11.274835    9998 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 22:27:11.275968    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 22:27:11.275984    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 22:27:11.278040    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.278526    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:11.278556    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:11.278707    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:12.083092    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 22:27:12.089384    9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 22:27:12.089415    9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 22:27:12.108603    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 22:27:12.110094    9998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 22:27:12.110117    9998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 22:27:12.116697    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 22:27:12.147041    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 22:27:12.160103    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 22:27:12.175353    9998 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 22:27:12.175382    9998 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 22:27:12.210459    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 22:27:12.227717    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 22:27:12.346473    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 22:27:12.357239    9998 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 22:27:12.357265    9998 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 22:27:12.365743    9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 22:27:12.365764    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 22:27:12.376747    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 22:27:12.378130    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 22:27:12.378149    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 22:27:12.595045    9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 22:27:12.595074    9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 22:27:12.623616    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 22:27:12.848988    9998 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 22:27:12.849018    9998 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 22:27:12.950376    9998 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:27:12.950405    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 22:27:13.009501    9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 22:27:13.009534    9998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 22:27:13.087689    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 22:27:13.087720    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 22:27:13.368087    9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 22:27:13.368114    9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 22:27:13.460638    9998 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 22:27:13.460663    9998 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 22:27:13.521676    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 22:27:13.560893    9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:27:13.560918    9998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 22:27:13.636025    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 22:27:13.636053    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 22:27:13.773157    9998 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:27:13.773178    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 22:27:13.845003    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 22:27:13.845046    9998 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 22:27:13.931408    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 22:27:14.052334    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 22:27:14.052359    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 22:27:14.159970    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 22:27:14.181920    9998 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:27:14.181948    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 22:27:14.410122    9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 22:27:14.410153    9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 22:27:14.496152    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:27:14.808310    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 22:27:14.808334    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 22:27:15.137188    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 22:27:15.137214    9998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 22:27:15.748712    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 22:27:15.748737    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 22:27:16.126262    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 22:27:16.126288    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 22:27:16.461394    9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:27:16.461430    9998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 22:27:16.738249    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 22:27:18.779730    9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 22:27:18.782680    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:18.783156    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:18.783187    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:18.783345    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:19.258012    9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 22:27:19.417390    9998 addons.go:239] Setting addon gcp-auth=true in "addons-462156"
	I1210 22:27:19.417470    9998 host.go:66] Checking if "addons-462156" exists ...
	I1210 22:27:19.419693    9998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 22:27:19.422550    9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:19.423074    9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
	I1210 22:27:19.423110    9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
	I1210 22:27:19.423324    9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
	I1210 22:27:20.295252    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.212128045s)
	I1210 22:27:20.295284    9998 addons.go:495] Verifying addon ingress=true in "addons-462156"
	I1210 22:27:20.295364    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.186732293s)
	I1210 22:27:20.295516    9998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.185363131s)
	I1210 22:27:20.295558    9998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.18542805s)
	I1210 22:27:20.295577    9998 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
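The sed pipeline completed above injects two things into the CoreDNS Corefile: a `log` directive and a `hosts` block mapping host.minikube.internal to 192.168.39.1. A quick way to confirm the result (not part of the test) would be:

	# Print the live Corefile; the hosts block shown in the comments is exactly
	# what the sed pipeline inserts, with the surrounding directives elided.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	#   ...
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }
	#   forward . /etc/resolv.conf
	#   ...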
	I1210 22:27:20.295644    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.17891331s)
	I1210 22:27:20.295707    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.14863982s)
	I1210 22:27:20.295780    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.135648198s)
	I1210 22:27:20.295808    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.085322242s)
	I1210 22:27:20.295924    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.068145496s)
	I1210 22:27:20.295974    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.949470459s)
	I1210 22:27:20.295996    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.919218159s)
	I1210 22:27:20.296062    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.672421397s)
	I1210 22:27:20.296106    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.774400599s)
	I1210 22:27:20.296128    9998 addons.go:495] Verifying addon registry=true in "addons-462156"
	I1210 22:27:20.296342    9998 node_ready.go:35] waiting up to 6m0s for node "addons-462156" to be "Ready" ...
	I1210 22:27:20.296248    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.364802739s)
	I1210 22:27:20.296389    9998 addons.go:495] Verifying addon metrics-server=true in "addons-462156"
	I1210 22:27:20.296340    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.136341258s)
	I1210 22:27:20.296915    9998 out.go:179] * Verifying ingress addon...
	I1210 22:27:20.297870    9998 out.go:179] * Verifying registry addon...
	I1210 22:27:20.297878    9998 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-462156 service yakd-dashboard -n yakd-dashboard
	
	I1210 22:27:20.299836    9998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 22:27:20.300148    9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 22:27:20.329088    9998 node_ready.go:49] node "addons-462156" is "Ready"
	I1210 22:27:20.329112    9998 node_ready.go:38] duration metric: took 32.747065ms for node "addons-462156" to be "Ready" ...
	I1210 22:27:20.329124    9998 api_server.go:52] waiting for apiserver process to appear ...
	I1210 22:27:20.329176    9998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:27:20.333537    9998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 22:27:20.333558    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.333646    9998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 22:27:20.333664    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 22:27:20.384940    9998 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
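The 'default-storageclass' warning above is a plain optimistic-concurrency conflict: the callback was clearing the default flag on the local-path class (created moments earlier by storage-provisioner-rancher) while another writer updated the same object. The flag being toggled is the standard default-class annotation, so re-running either patch would normally succeed; a hedged sketch:

	# Illustrative only: how the default-class flag is toggled; retrying after
	# the resourceVersion conflict would normally go through.
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'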
	I1210 22:27:20.577474    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.081252281s)
	W1210 22:27:20.577529    9998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 22:27:20.577562    9998 retry.go:31] will retry after 230.344397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
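The failure being retried here is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the CRD is not yet established when the dependent object is validated. Outside of the built-in retry, the usual fix would be to wait for CRD establishment before applying dependents, roughly:

	# Sketch (not what the test does; it simply retries): wait for the snapshot
	# CRDs to be established, then apply the object that depends on them.
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml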
	I1210 22:27:20.802239    9998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-462156" context rescaled to 1 replicas
	I1210 22:27:20.808033    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 22:27:20.808844    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:20.813250    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.392551    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.396505    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.438815    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.700515364s)
	I1210 22:27:21.438856    9998 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.019132439s)
	I1210 22:27:21.438884    9998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.109690734s)
	I1210 22:27:21.438902    9998 api_server.go:72] duration metric: took 10.198235938s to wait for apiserver process to appear ...
	I1210 22:27:21.438909    9998 api_server.go:88] waiting for apiserver healthz status ...
	I1210 22:27:21.438927    9998 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I1210 22:27:21.438859    9998 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-462156"
	I1210 22:27:21.440935    9998 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 22:27:21.440934    9998 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 22:27:21.442360    9998 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 22:27:21.442771    9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 22:27:21.443585    9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 22:27:21.443601    9998 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 22:27:21.502730    9998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 22:27:21.502761    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:21.510915    9998 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
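The healthz probe above can be reproduced by hand against the same endpoint; assuming the cluster's default anonymous-access rules (the system:public-info-viewer role) are in place, an unauthenticated request would normally return the same `ok`:

	# Manual equivalent of the probe above; -k skips TLS verification, which is
	# fine for a quick check against the test VM.
	curl -k https://192.168.39.89:8443/healthz
	# ok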
	I1210 22:27:21.521564    9998 api_server.go:141] control plane version: v1.34.2
	I1210 22:27:21.521625    9998 api_server.go:131] duration metric: took 82.706899ms to wait for apiserver health ...
	I1210 22:27:21.521638    9998 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 22:27:21.545701    9998 system_pods.go:59] 20 kube-system pods found
	I1210 22:27:21.545736    9998 system_pods.go:61] "amd-gpu-device-plugin-t84vv" [49aaeb54-4c35-4927-8903-28c074178738] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:21.545743    9998 system_pods.go:61] "coredns-66bc5c9577-4w6v4" [65e6ede4-ca2c-4eb9-a3d1-a4209459a010] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:21.545750    9998 system_pods.go:61] "coredns-66bc5c9577-lh65b" [35786400-7e12-45f3-a524-9b2ecdf2a3c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:21.545756    9998 system_pods.go:61] "csi-hostpath-attacher-0" [d7766fe6-b121-4def-b39d-a4e8148d691f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:21.545761    9998 system_pods.go:61] "csi-hostpath-resizer-0" [a77816c2-7bdc-4799-8c3e-f5e522b532fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:21.545769    9998 system_pods.go:61] "csi-hostpathplugin-4ktdr" [983cebd7-5378-4d08-bbde-53a7d16d5e75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:21.545773    9998 system_pods.go:61] "etcd-addons-462156" [6b1c99f1-0ade-4885-b63a-5cb4b0f77c96] Running
	I1210 22:27:21.545777    9998 system_pods.go:61] "kube-apiserver-addons-462156" [b596f37d-91a2-4b92-864c-dfa47885ddaf] Running
	I1210 22:27:21.545780    9998 system_pods.go:61] "kube-controller-manager-addons-462156" [f944b071-7099-4e85-895e-04dc4be2254d] Running
	I1210 22:27:21.545785    9998 system_pods.go:61] "kube-ingress-dns-minikube" [ebd516b6-c87a-40e2-a707-75ee9f2dfe60] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:21.545798    9998 system_pods.go:61] "kube-proxy-p4fsb" [7573193d-6d1a-4234-a12c-343613e99d1e] Running
	I1210 22:27:21.545803    9998 system_pods.go:61] "kube-scheduler-addons-462156" [0ce509bc-4d77-42f4-8f26-b0bb89f9489a] Running
	I1210 22:27:21.545807    9998 system_pods.go:61] "metrics-server-85b7d694d7-t4kn5" [72239687-ab58-4aee-b697-075933963bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:21.545814    9998 system_pods.go:61] "nvidia-device-plugin-daemonset-2knz8" [e3f636bc-8db9-4dc3-851a-f1331a2516e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:21.545819    9998 system_pods.go:61] "registry-6b586f9694-hbcct" [f09be740-9c3b-4dc9-ae13-adfd16ccaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:21.545824    9998 system_pods.go:61] "registry-creds-764b6fb674-vz624" [a07caa13-412e-4ac4-a9a0-4ff42d41ed39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:21.545829    9998 system_pods.go:61] "registry-proxy-bs796" [dd3cf5fe-024d-49ac-9781-1c16ce0767bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:21.545834    9998 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x7c9l" [2085f3db-d1b7-4f0b-8cc4-ee9d492ba05d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:21.545839    9998 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xgm5z" [27e6d8a8-39b6-461b-8a95-b5810cb5e347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:21.545844    9998 system_pods.go:61] "storage-provisioner" [34acfc61-a61c-4021-9f68-bfd552138291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:21.545850    9998 system_pods.go:74] duration metric: took 24.206939ms to wait for pod list to return data ...
	I1210 22:27:21.545859    9998 default_sa.go:34] waiting for default service account to be created ...
	I1210 22:27:21.554985    9998 default_sa.go:45] found service account: "default"
	I1210 22:27:21.555009    9998 default_sa.go:55] duration metric: took 9.145333ms for default service account to be created ...
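The default_sa check above is satisfied as soon as a ServiceAccount named "default" exists in the "default" namespace. A minimal client-go sketch of an equivalent lookup, assuming a kubeconfig at ~/.kube/config (minikube itself reads /var/lib/minikube/kubeconfig inside the VM):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The wait ends once this Get stops returning a NotFound error.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		fmt.Println("default service account not ready yet:", err)
		return
	}
	fmt.Println("found service account:", sa.Name)
}
```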
	I1210 22:27:21.555019    9998 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 22:27:21.555589    9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 22:27:21.555624    9998 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 22:27:21.592667    9998 system_pods.go:86] 20 kube-system pods found
	I1210 22:27:21.592708    9998 system_pods.go:89] "amd-gpu-device-plugin-t84vv" [49aaeb54-4c35-4927-8903-28c074178738] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 22:27:21.592719    9998 system_pods.go:89] "coredns-66bc5c9577-4w6v4" [65e6ede4-ca2c-4eb9-a3d1-a4209459a010] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:21.592731    9998 system_pods.go:89] "coredns-66bc5c9577-lh65b" [35786400-7e12-45f3-a524-9b2ecdf2a3c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 22:27:21.592740    9998 system_pods.go:89] "csi-hostpath-attacher-0" [d7766fe6-b121-4def-b39d-a4e8148d691f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 22:27:21.592748    9998 system_pods.go:89] "csi-hostpath-resizer-0" [a77816c2-7bdc-4799-8c3e-f5e522b532fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 22:27:21.592755    9998 system_pods.go:89] "csi-hostpathplugin-4ktdr" [983cebd7-5378-4d08-bbde-53a7d16d5e75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 22:27:21.592759    9998 system_pods.go:89] "etcd-addons-462156" [6b1c99f1-0ade-4885-b63a-5cb4b0f77c96] Running
	I1210 22:27:21.592766    9998 system_pods.go:89] "kube-apiserver-addons-462156" [b596f37d-91a2-4b92-864c-dfa47885ddaf] Running
	I1210 22:27:21.592776    9998 system_pods.go:89] "kube-controller-manager-addons-462156" [f944b071-7099-4e85-895e-04dc4be2254d] Running
	I1210 22:27:21.592784    9998 system_pods.go:89] "kube-ingress-dns-minikube" [ebd516b6-c87a-40e2-a707-75ee9f2dfe60] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 22:27:21.592799    9998 system_pods.go:89] "kube-proxy-p4fsb" [7573193d-6d1a-4234-a12c-343613e99d1e] Running
	I1210 22:27:21.592807    9998 system_pods.go:89] "kube-scheduler-addons-462156" [0ce509bc-4d77-42f4-8f26-b0bb89f9489a] Running
	I1210 22:27:21.592816    9998 system_pods.go:89] "metrics-server-85b7d694d7-t4kn5" [72239687-ab58-4aee-b697-075933963bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 22:27:21.592823    9998 system_pods.go:89] "nvidia-device-plugin-daemonset-2knz8" [e3f636bc-8db9-4dc3-851a-f1331a2516e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 22:27:21.592836    9998 system_pods.go:89] "registry-6b586f9694-hbcct" [f09be740-9c3b-4dc9-ae13-adfd16ccaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 22:27:21.592841    9998 system_pods.go:89] "registry-creds-764b6fb674-vz624" [a07caa13-412e-4ac4-a9a0-4ff42d41ed39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 22:27:21.592848    9998 system_pods.go:89] "registry-proxy-bs796" [dd3cf5fe-024d-49ac-9781-1c16ce0767bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 22:27:21.592856    9998 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x7c9l" [2085f3db-d1b7-4f0b-8cc4-ee9d492ba05d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:21.592869    9998 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xgm5z" [27e6d8a8-39b6-461b-8a95-b5810cb5e347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 22:27:21.592878    9998 system_pods.go:89] "storage-provisioner" [34acfc61-a61c-4021-9f68-bfd552138291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 22:27:21.592888    9998 system_pods.go:126] duration metric: took 37.863274ms to wait for k8s-apps to be running ...
	I1210 22:27:21.592899    9998 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 22:27:21.592948    9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:27:21.713677    9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:27:21.713709    9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 22:27:21.797422    9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 22:27:21.813346    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:21.814860    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:21.949792    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.312455    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.314640    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.450356    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:22.800636    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.992556298s)
	I1210 22:27:22.800713    9998 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.207738666s)
	I1210 22:27:22.800736    9998 system_svc.go:56] duration metric: took 1.207835402s WaitForService to wait for kubelet
	I1210 22:27:22.800751    9998 kubeadm.go:587] duration metric: took 11.560083814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 22:27:22.800779    9998 node_conditions.go:102] verifying NodePressure condition ...
	I1210 22:27:22.808415    9998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 22:27:22.808457    9998 node_conditions.go:123] node cpu capacity is 2
	I1210 22:27:22.808478    9998 node_conditions.go:105] duration metric: took 7.692838ms to run NodePressure ...
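The NodePressure step above reads per-node capacity (ephemeral storage and CPU) from the node status. A short client-go sketch that prints the same two figures for each node, assuming a kubeconfig at ~/.kube/config; it only illustrates where those numbers come from, not minikube's node_conditions.go logic:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the two capacities the log reports per node.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```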
	I1210 22:27:22.808500    9998 start.go:242] waiting for startup goroutines ...
	I1210 22:27:22.811389    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:22.811857    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:22.963231    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.295847    9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.498379256s)
	I1210 22:27:23.296965    9998 addons.go:495] Verifying addon gcp-auth=true in "addons-462156"
	I1210 22:27:23.299247    9998 out.go:179] * Verifying gcp-auth addon...
	I1210 22:27:23.301799    9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 22:27:23.364109    9998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 22:27:23.364131    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.364138    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.364265    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.454789    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:23.807963    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:23.808172    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:23.812002    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:23.953318    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.304218    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.306670    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.313852    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.447779    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:24.803655    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:24.804108    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:24.805871    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:24.946916    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.304318    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.305312    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.305362    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.447655    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:25.806088    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:25.811003    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:25.812360    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:25.953130    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.306891    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.308844    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.308880    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.446470    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:26.811259    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:26.813424    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:26.813508    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:26.948506    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.307099    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.309030    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.311138    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.449721    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:27.806833    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:27.806975    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:27.809246    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:27.948011    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.306558    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.306676    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.306860    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.449638    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:28.807961    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:28.807962    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:28.808030    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:28.947000    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.310172    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.310925    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.311472    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.447749    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:29.804614    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:29.804885    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:29.805180    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:29.947625    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.306941    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.307280    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.312022    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.446915    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:30.804070    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:30.804124    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:30.805363    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:30.947823    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.304407    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.304638    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.306046    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.447232    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:31.806932    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:31.807196    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:31.808261    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:31.949666    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.303317    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.305504    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.310394    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.447929    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:32.808448    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:32.808646    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:32.808805    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:32.949291    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.305087    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.305804    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.311416    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.447075    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:33.806763    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:33.806861    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:33.807549    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:33.949701    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.311842    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.311897    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.312124    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.447010    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:34.806561    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:34.808756    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:34.809118    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:34.947058    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.304794    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.304870    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.306239    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.447312    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:35.804966    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:35.804988    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:35.805848    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:35.946859    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.306276    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.306561    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.307054    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.446937    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:36.805017    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:36.805173    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:36.805917    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:36.947015    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.306956    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.307676    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.308565    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.449097    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:37.805067    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:37.805265    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:37.808098    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:37.951532    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.308675    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.308936    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.313481    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.448426    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:38.807182    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:38.808095    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:38.808532    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:38.947651    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:39.305612    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:39.306073    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:39.306235    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:39.446432    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.090959    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.091097    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:40.091112    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.091246    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.308383    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:40.308433    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.308610    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.446814    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:40.807524    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:40.807798    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:40.807892    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:40.947333    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.308033    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.308053    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:41.309490    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.448892    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:41.805904    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:41.806380    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:41.809476    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:41.946618    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.308673    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.313742    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:42.314160    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.448049    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:42.805431    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:42.805569    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:42.810523    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:42.948059    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.305965    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:43.308216    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.309734    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.447076    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:43.807719    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:43.807736    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:43.807978    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:43.948022    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.367104    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:44.367160    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:44.368842    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.510593    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:44.805367    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:44.805923    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:44.807985    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:44.950020    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.590418    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:45.590487    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.590609    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:45.590630    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:45.805521    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:45.806126    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:45.806552    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:45.948475    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.310010    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.311952    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:46.312373    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:46.450354    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:46.806664    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:46.808467    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:46.809001    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:46.948218    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.422945    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:47.427588    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.427770    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:47.447081    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:47.804919    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:47.805313    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:47.805604    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:47.948280    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.312902    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:48.313896    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.315724    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:48.449666    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:48.808247    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:48.810247    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:48.811321    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:48.951653    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:49.304762    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:49.306606    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:49.306638    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:49.447312    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:49.804928    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:49.805047    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:49.805049    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:49.947721    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.307976    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:50.310163    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:50.311400    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:50.447586    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:50.806428    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:50.806757    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:50.808267    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:50.947278    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.308493    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:51.308703    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:51.312865    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:51.447893    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:51.814171    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:51.818028    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:51.818054    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:51.947232    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:52.307670    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:52.308158    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:52.310044    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:52.447819    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:52.804248    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 22:27:52.804433    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:52.805932    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:52.946341    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:53.305748    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:53.306320    9998 kapi.go:107] duration metric: took 33.006167349s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 22:27:53.306495    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:53.446977    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:53.803578    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:53.805320    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:53.947510    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:54.305179    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:54.310565    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:54.451496    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:54.804753    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:54.807470    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:54.953732    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:55.305265    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:55.305543    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:55.446891    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:55.803563    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:55.805372    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:55.946992    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:56.303515    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:56.306258    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:56.447286    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:56.805377    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:56.806416    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:56.946842    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:57.303842    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:57.309927    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:57.448300    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:57.809727    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:57.810761    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:57.949479    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:58.307191    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:58.307243    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:58.448332    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:58.806722    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:58.807747    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:58.947243    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:59.304354    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:59.306143    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:59.451668    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:27:59.812922    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:27:59.814421    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:27:59.947297    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:00.307169    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:00.307810    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:00.488913    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:00.805269    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:00.805266    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:00.947150    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:01.305801    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:01.308156    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:01.451838    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:01.803618    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:01.807083    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:01.947762    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:02.307508    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:02.308599    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:02.447532    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:02.994386    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:02.995723    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:02.995745    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:03.309344    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:03.312309    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:03.448427    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:03.807219    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:03.812778    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:03.950026    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:04.311426    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:04.311519    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:04.451619    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:04.806909    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:04.809799    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:04.948262    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:05.309938    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:05.311173    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:05.447252    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:05.807571    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:05.808247    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:05.946940    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:06.306677    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:06.307057    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:06.448731    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:06.806741    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:06.807313    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:06.947374    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:07.304730    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:07.305672    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:07.447841    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:07.803411    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:07.805339    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:07.948407    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:08.305715    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:08.307586    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:08.450714    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:08.806219    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:08.808114    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:08.949751    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:09.308294    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:09.310203    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:09.448369    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:09.808292    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:09.812843    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:09.948658    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:10.305498    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:10.309532    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:10.602399    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:10.807962    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:10.809894    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:10.946268    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:11.307845    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:11.310322    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:11.446906    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:11.805365    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:11.809196    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:11.949245    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:12.315582    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:12.317773    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:12.447883    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:12.809055    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:12.809594    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:12.953133    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:13.314994    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:13.315543    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:13.449089    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:13.809665    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:13.810356    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:13.950469    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:14.308075    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:14.309148    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:14.447838    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:14.807370    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:14.818229    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:14.948993    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:15.303869    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:15.305319    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:15.451762    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:15.805509    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:15.805694    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:15.948234    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:16.306025    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:16.312004    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:16.454039    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:16.808537    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:16.808583    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:16.950393    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:17.306760    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:17.307987    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:17.448135    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:17.806353    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:17.808304    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:17.947236    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:18.310139    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:18.311301    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:18.451042    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:18.804628    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:18.805680    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:18.948158    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:19.305790    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:19.306285    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:19.455998    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:19.807133    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:19.808402    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:19.948043    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:20.305654    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:20.307596    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:20.446191    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:20.806074    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:20.808255    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:20.946822    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:21.303803    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:21.305273    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:21.447178    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:21.806134    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:21.808845    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:21.949416    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:22.308306    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:22.308571    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:22.448323    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:22.806701    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:22.807815    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:22.945875    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:23.308001    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:23.308120    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:23.447010    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:23.808596    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:23.808813    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:23.947080    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:24.307860    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:24.308129    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:24.448009    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 22:28:24.809143    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:24.812455    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:24.947342    9998 kapi.go:107] duration metric: took 1m3.504568389s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 22:28:25.307954    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:25.309056    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:25.804083    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:25.806691    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:26.304214    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:26.307914    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:26.809275    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:26.813377    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:27.308924    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:27.310827    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:27.811419    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:27.814269    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:28.305010    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:28.309055    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:28.807579    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:28.808569    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:29.358463    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:29.360598    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:29.805754    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:29.807057    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:30.306167    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:30.306825    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:30.807626    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:30.808893    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:31.305008    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:31.306579    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:31.805116    9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 22:28:31.806055    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:32.306750    9998 kapi.go:107] duration metric: took 1m12.006912383s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 22:28:32.306802    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:32.805892    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:33.310478    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:33.804775    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:34.306828    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:34.805272    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:35.306495    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:35.805913    9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 22:28:36.308868    9998 kapi.go:107] duration metric: took 1m13.007064708s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 22:28:36.310840    9998 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-462156 cluster.
	I1210 22:28:36.312304    9998 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 22:28:36.313865    9998 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 22:28:36.315293    9998 out.go:179] * Enabled addons: cloud-spanner, registry-creds, storage-provisioner, inspektor-gadget, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1210 22:28:36.316567    9998 addons.go:530] duration metric: took 1m25.0758813s for enable addons: enabled=[cloud-spanner registry-creds storage-provisioner inspektor-gadget nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1210 22:28:36.316608    9998 start.go:247] waiting for cluster config update ...
	I1210 22:28:36.316632    9998 start.go:256] writing updated cluster config ...
	I1210 22:28:36.316919    9998 ssh_runner.go:195] Run: rm -f paused
	I1210 22:28:36.324369    9998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:28:36.409892    9998 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.416205    9998 pod_ready.go:94] pod "coredns-66bc5c9577-4w6v4" is "Ready"
	I1210 22:28:36.416245    9998 pod_ready.go:86] duration metric: took 6.324769ms for pod "coredns-66bc5c9577-4w6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.418579    9998 pod_ready.go:83] waiting for pod "etcd-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.427344    9998 pod_ready.go:94] pod "etcd-addons-462156" is "Ready"
	I1210 22:28:36.427368    9998 pod_ready.go:86] duration metric: took 8.767368ms for pod "etcd-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.432617    9998 pod_ready.go:83] waiting for pod "kube-apiserver-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.439042    9998 pod_ready.go:94] pod "kube-apiserver-addons-462156" is "Ready"
	I1210 22:28:36.439066    9998 pod_ready.go:86] duration metric: took 6.427209ms for pod "kube-apiserver-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.444417    9998 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.728875    9998 pod_ready.go:94] pod "kube-controller-manager-addons-462156" is "Ready"
	I1210 22:28:36.728901    9998 pod_ready.go:86] duration metric: took 284.466578ms for pod "kube-controller-manager-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:36.928614    9998 pod_ready.go:83] waiting for pod "kube-proxy-p4fsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:37.328940    9998 pod_ready.go:94] pod "kube-proxy-p4fsb" is "Ready"
	I1210 22:28:37.328963    9998 pod_ready.go:86] duration metric: took 400.313801ms for pod "kube-proxy-p4fsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:37.528860    9998 pod_ready.go:83] waiting for pod "kube-scheduler-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:37.929218    9998 pod_ready.go:94] pod "kube-scheduler-addons-462156" is "Ready"
	I1210 22:28:37.929241    9998 pod_ready.go:86] duration metric: took 400.351455ms for pod "kube-scheduler-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 22:28:37.929251    9998 pod_ready.go:40] duration metric: took 1.604857077s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 22:28:37.978192    9998 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 22:28:37.979994    9998 out.go:179] * Done! kubectl is now configured to use "addons-462156" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.701491076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.701617352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.702006772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.719537086Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.741680559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d998ff85-eaa9-414d-b434-6b5552e70d9b name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.741979967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d998ff85-eaa9-414d-b434-6b5552e70d9b name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.743640285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=678c4570-6859-4d0e-8d70-31a304c8e44b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.745272322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905745244008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=678c4570-6859-4d0e-8d70-31a304c8e44b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746172976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746229691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746523328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.778688513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac178b67-1ded-45d8-b113-5e1b49e946c2 name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.778866528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac178b67-1ded-45d8-b113-5e1b49e946c2 name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.780034972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83cab9c8-e543-45de-b027-3aadb774d5b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.781468557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905781438966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83cab9c8-e543-45de-b027-3aadb774d5b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782479173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782541454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782930164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.816550452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62089880-f95e-40a0-b1f6-a2e708970eae name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.816628196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62089880-f95e-40a0-b1f6-a2e708970eae name=/runtime.v1.RuntimeService/Version
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.818313533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f428fae7-c8a7-4b3a-8969-d964d2aa6e59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.819565678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905819536493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f428fae7-c8a7-4b3a-8969-d964d2aa6e59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.820628008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.820695944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.821075273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b558daaacc029       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   2e03a0ab62889       nginx                                       default
	834fb0ab7b6fa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   b20c392c9cca8       busybox                                     default
	ccf0c21ab7b3c       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   be846878eebfd       ingress-nginx-controller-85d4c799dd-rr58f   ingress-nginx
	9a644cccdec82       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   18528e7312776       ingress-nginx-admission-patch-w8rwc         ingress-nginx
	6b29d29e79483       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   a066e40961bec       ingress-nginx-admission-create-w5dlb        ingress-nginx
	c1370f0b5c5b7       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   ca5ee6e3983d7       kube-ingress-dns-minikube                   kube-system
	698a0a7d5083b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   a10e70dd465a1       amd-gpu-device-plugin-t84vv                 kube-system
	ed7661ceb3c5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   782f9615fd7c0       storage-provisioner                         kube-system
	7188c2ead6e38       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   7de6035cd917e       coredns-66bc5c9577-4w6v4                    kube-system
	87b613868b0c2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   cfe3f2b53d1c6       kube-proxy-p4fsb                            kube-system
	5e44079bd7e7a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   48ebcb08bc37e       kube-scheduler-addons-462156                kube-system
	865cd43741510       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   b43cba2a2d554       etcd-addons-462156                          kube-system
	3ae57e603fedc       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   cbbc34c759b76       kube-controller-manager-addons-462156       kube-system
	802193904fcda       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   c5876d12b7d7f       kube-apiserver-addons-462156                kube-system
	
	
	==> coredns [7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88] <==
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51623 - 5125 "HINFO IN 4691920241162746704.8558605234871798027. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026712188s
	[INFO] 10.244.0.23:44049 - 60332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000211225s
	[INFO] 10.244.0.23:35855 - 59323 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0008102s
	[INFO] 10.244.0.23:47734 - 10675 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186952s
	[INFO] 10.244.0.23:49750 - 26562 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094608s
	[INFO] 10.244.0.23:54807 - 759 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007952s
	[INFO] 10.244.0.23:53625 - 36565 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072255s
	[INFO] 10.244.0.23:36915 - 29260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001812467s
	[INFO] 10.244.0.23:42340 - 21935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.001366896s
	[INFO] 10.244.0.27:44289 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001642496s
	[INFO] 10.244.0.27:46516 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014987s
	
	
	==> describe nodes <==
	Name:               addons-462156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-462156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=addons-462156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_27_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-462156
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:27:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-462156
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 22:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 22:29:39 +0000   Wed, 10 Dec 2025 22:27:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 22:29:39 +0000   Wed, 10 Dec 2025 22:27:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 22:29:39 +0000   Wed, 10 Dec 2025 22:27:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 22:29:39 +0000   Wed, 10 Dec 2025 22:27:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-462156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 04673162af0d46ce874ca95dda098d35
	  System UUID:                04673162-af0d-46ce-874c-a95dda098d35
	  Boot ID:                    7f940656-edd7-4642-87ac-d557629f3ef4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-p9mq7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-rr58f    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-t84vv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-66bc5c9577-4w6v4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-462156                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-addons-462156                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-462156        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-p4fsb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-462156                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node addons-462156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node addons-462156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node addons-462156 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s                  kubelet          Node addons-462156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                  kubelet          Node addons-462156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                  kubelet          Node addons-462156 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s                  kubelet          Node addons-462156 status is now: NodeReady
	  Normal  RegisteredNode           4m36s                  node-controller  Node addons-462156 event: Registered Node addons-462156 in Controller
	
	
	==> dmesg <==
	[  +0.000014] kauditd_printk_skb: 276 callbacks suppressed
	[  +3.561449] kauditd_printk_skb: 407 callbacks suppressed
	[  +5.796427] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.966961] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.873850] kauditd_printk_skb: 26 callbacks suppressed
	[Dec10 22:28] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.136584] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.096414] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.135139] kauditd_printk_skb: 200 callbacks suppressed
	[  +3.000501] kauditd_printk_skb: 113 callbacks suppressed
	[  +0.000181] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.506434] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.328725] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.734521] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.919858] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 22:29] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000193] kauditd_printk_skb: 108 callbacks suppressed
	[  +0.559663] kauditd_printk_skb: 167 callbacks suppressed
	[  +1.027041] kauditd_printk_skb: 181 callbacks suppressed
	[  +6.213864] kauditd_printk_skb: 101 callbacks suppressed
	[  +5.000933] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.000556] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.863509] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.703385] kauditd_printk_skb: 48 callbacks suppressed
	[Dec10 22:31] kauditd_printk_skb: 71 callbacks suppressed
	
	
	==> etcd [865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d] <==
	{"level":"info","ts":"2025-12-10T22:28:14.775406Z","caller":"traceutil/trace.go:172","msg":"trace[764113813] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1098; }","duration":"118.256612ms","start":"2025-12-10T22:28:14.657134Z","end":"2025-12-10T22:28:14.775390Z","steps":["trace[764113813] 'read index received'  (duration: 118.250897ms)","trace[764113813] 'applied index is now lower than readState.Index'  (duration: 5.03µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:28:14.775532Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.398353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:28:14.775549Z","caller":"traceutil/trace.go:172","msg":"trace[1391575428] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:1071; }","duration":"118.433453ms","start":"2025-12-10T22:28:14.657111Z","end":"2025-12-10T22:28:14.775544Z","steps":["trace[1391575428] 'agreement among raft nodes before linearized reading'  (duration: 118.365741ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:28:14.776173Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.717768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/servicecidrs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:28:14.776353Z","caller":"traceutil/trace.go:172","msg":"trace[1073129533] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:1072; }","duration":"116.968163ms","start":"2025-12-10T22:28:14.659374Z","end":"2025-12-10T22:28:14.776342Z","steps":["trace[1073129533] 'agreement among raft nodes before linearized reading'  (duration: 116.553652ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:28:14.776440Z","caller":"traceutil/trace.go:172","msg":"trace[1707107323] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"192.964005ms","start":"2025-12-10T22:28:14.583465Z","end":"2025-12-10T22:28:14.776429Z","steps":["trace[1707107323] 'process raft request'  (duration: 192.254518ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:28:19.711794Z","caller":"traceutil/trace.go:172","msg":"trace[1925148551] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"103.235058ms","start":"2025-12-10T22:28:19.608495Z","end":"2025-12-10T22:28:19.711730Z","steps":["trace[1925148551] 'process raft request'  (duration: 100.63493ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:28:29.352491Z","caller":"traceutil/trace.go:172","msg":"trace[1288898758] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"156.998921ms","start":"2025-12-10T22:28:29.195479Z","end":"2025-12-10T22:28:29.352477Z","steps":["trace[1288898758] 'process raft request'  (duration: 156.913639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:28:30.537168Z","caller":"traceutil/trace.go:172","msg":"trace[1990854520] linearizableReadLoop","detail":"{readStateIndex:1202; appliedIndex:1202; }","duration":"100.222766ms","start":"2025-12-10T22:28:30.436930Z","end":"2025-12-10T22:28:30.537153Z","steps":["trace[1990854520] 'read index received'  (duration: 100.217713ms)","trace[1990854520] 'applied index is now lower than readState.Index'  (duration: 4.475µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:28:30.537314Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.348556ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:28:30.537335Z","caller":"traceutil/trace.go:172","msg":"trace[1130245531] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1172; }","duration":"100.405173ms","start":"2025-12-10T22:28:30.436924Z","end":"2025-12-10T22:28:30.537329Z","steps":["trace[1130245531] 'agreement among raft nodes before linearized reading'  (duration: 100.324017ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:28:30.729397Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.811976ms","expected-duration":"100ms","prefix":"","request":"header:<ID:416471959417307128 > lease_revoke:<id:05c79b0a5ffabeb3>","response":"size:28"}
	{"level":"info","ts":"2025-12-10T22:28:30.730840Z","caller":"traceutil/trace.go:172","msg":"trace[597849462] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1202; }","duration":"121.91333ms","start":"2025-12-10T22:28:30.608913Z","end":"2025-12-10T22:28:30.730826Z","steps":["trace[597849462] 'read index received'  (duration: 26.151µs)","trace[597849462] 'applied index is now lower than readState.Index'  (duration: 121.885785ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:28:30.730948Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.023959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-12-10T22:28:30.730966Z","caller":"traceutil/trace.go:172","msg":"trace[1412954282] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1172; }","duration":"122.051324ms","start":"2025-12-10T22:28:30.608909Z","end":"2025-12-10T22:28:30.730961Z","steps":["trace[1412954282] 'agreement among raft nodes before linearized reading'  (duration: 121.958021ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:29:04.550990Z","caller":"traceutil/trace.go:172","msg":"trace[45574879] linearizableReadLoop","detail":"{readStateIndex:1398; appliedIndex:1398; }","duration":"193.90097ms","start":"2025-12-10T22:29:04.357071Z","end":"2025-12-10T22:29:04.550972Z","steps":["trace[45574879] 'read index received'  (duration: 193.876073ms)","trace[45574879] 'applied index is now lower than readState.Index'  (duration: 24.221µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:29:04.551187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.070927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:29:04.551229Z","caller":"traceutil/trace.go:172","msg":"trace[1529683260] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"194.159357ms","start":"2025-12-10T22:29:04.357062Z","end":"2025-12-10T22:29:04.551221Z","steps":["trace[1529683260] 'agreement among raft nodes before linearized reading'  (duration: 194.043971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:29:04.551574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.531367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:29:04.551625Z","caller":"traceutil/trace.go:172","msg":"trace[1081372739] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"187.586994ms","start":"2025-12-10T22:29:04.364031Z","end":"2025-12-10T22:29:04.551618Z","steps":["trace[1081372739] 'agreement among raft nodes before linearized reading'  (duration: 187.517888ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:29:04.551987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.373645ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:29:04.552027Z","caller":"traceutil/trace.go:172","msg":"trace[1205609276] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1360; }","duration":"114.416402ms","start":"2025-12-10T22:29:04.437604Z","end":"2025-12-10T22:29:04.552021Z","steps":["trace[1205609276] 'agreement among raft nodes before linearized reading'  (duration: 114.354435ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:29:04.554575Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.188545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:29:04.554619Z","caller":"traceutil/trace.go:172","msg":"trace[2123995129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"190.235872ms","start":"2025-12-10T22:29:04.364376Z","end":"2025-12-10T22:29:04.554612Z","steps":["trace[2123995129] 'agreement among raft nodes before linearized reading'  (duration: 190.164834ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:29:27.999643Z","caller":"traceutil/trace.go:172","msg":"trace[2094646558] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"121.251986ms","start":"2025-12-10T22:29:27.878378Z","end":"2025-12-10T22:29:27.999630Z","steps":["trace[2094646558] 'process raft request'  (duration: 121.163244ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:31:46 up 5 min,  0 users,  load average: 0.66, 1.53, 0.80
	Linux addons-462156 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0] <==
	E1210 22:28:05.132620       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.162.166:443: connect: connection refused" logger="UnhandledError"
	E1210 22:28:05.153888       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.162.166:443: connect: connection refused" logger="UnhandledError"
	I1210 22:28:05.261517       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 22:28:49.784508       1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:50570: use of closed network connection
	E1210 22:28:49.972528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:50594: use of closed network connection
	I1210 22:28:59.132895       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.153.104"}
	I1210 22:29:06.162169       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1210 22:29:17.761093       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 22:29:17.997415       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.194.0"}
	I1210 22:29:30.064040       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1210 22:29:36.530795       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1210 22:29:56.208593       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 22:29:56.208723       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 22:29:56.243987       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 22:29:56.244129       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 22:29:56.249426       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 22:29:56.249472       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 22:29:56.267652       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 22:29:56.267682       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 22:29:56.286011       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 22:29:56.286080       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1210 22:29:57.250239       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1210 22:29:57.286691       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1210 22:29:57.337317       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1210 22:31:44.734841       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.94.71"}
	
	
	==> kube-controller-manager [3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf] <==
	E1210 22:30:05.873463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 22:30:10.438956       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 22:30:10.439061       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 22:30:10.496604       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 22:30:10.496674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 22:30:12.338591       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:12.339975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:30:14.080188       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:14.081495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:30:14.988961       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:14.989956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:30:30.433682       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:30.435010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:30:31.484933       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:31.486040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:30:37.811499       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:30:37.812670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:31:06.641958       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:31:06.643236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:31:11.128949       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:31:11.130871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:31:26.952280       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:31:26.953340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 22:31:40.540134       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 22:31:40.541175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a] <==
	I1210 22:27:12.481921       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 22:27:12.584872       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 22:27:12.584911       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.89"]
	E1210 22:27:12.585017       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:27:12.808504       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 22:27:12.808556       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 22:27:12.808581       1 server_linux.go:132] "Using iptables Proxier"
	I1210 22:27:12.827642       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:27:12.829050       1 server.go:527] "Version info" version="v1.34.2"
	I1210 22:27:12.829078       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:27:12.855032       1 config.go:200] "Starting service config controller"
	I1210 22:27:12.855045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:27:12.855087       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:27:12.855093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:27:12.855120       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:27:12.855123       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:27:12.859595       1 config.go:309] "Starting node config controller"
	I1210 22:27:12.859610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:27:12.859617       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 22:27:12.956742       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 22:27:12.956807       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 22:27:12.956861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03] <==
	E1210 22:27:03.308201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 22:27:03.308246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:27:03.308293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:27:03.308348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 22:27:03.308395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 22:27:03.308429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:27:03.308475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:27:03.308539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:27:03.309871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 22:27:04.113393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 22:27:04.162218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 22:27:04.213871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 22:27:04.238652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 22:27:04.305859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 22:27:04.311330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 22:27:04.339576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 22:27:04.376973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 22:27:04.418431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 22:27:04.419910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 22:27:04.469696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 22:27:04.509964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 22:27:04.602644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 22:27:04.646729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 22:27:04.715434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1210 22:27:06.794132       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 22:30:06 addons-462156 kubelet[1515]: E1210 22:30:06.502391    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405806501943801 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:06 addons-462156 kubelet[1515]: E1210 22:30:06.502436    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405806501943801 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:07 addons-462156 kubelet[1515]: I1210 22:30:07.455651    1515 scope.go:117] "RemoveContainer" containerID="0b5cbca6e062454211820a5be3050e40be3e2a32b3fb778286b28674f90e1a45"
	Dec 10 22:30:07 addons-462156 kubelet[1515]: I1210 22:30:07.569267    1515 scope.go:117] "RemoveContainer" containerID="c13155ba5d4acd98556ddf7a366f059622b46ad214e9c58836ffb9c756df6c34"
	Dec 10 22:30:16 addons-462156 kubelet[1515]: E1210 22:30:16.505118    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405816504748727 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:16 addons-462156 kubelet[1515]: E1210 22:30:16.505159    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405816504748727 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:26 addons-462156 kubelet[1515]: E1210 22:30:26.507554    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405826506987386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:26 addons-462156 kubelet[1515]: E1210 22:30:26.507584    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405826506987386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:36 addons-462156 kubelet[1515]: E1210 22:30:36.511408    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405836511009400 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:36 addons-462156 kubelet[1515]: E1210 22:30:36.511437    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405836511009400 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:46 addons-462156 kubelet[1515]: E1210 22:30:46.514308    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405846513803711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:46 addons-462156 kubelet[1515]: E1210 22:30:46.514356    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405846513803711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:56 addons-462156 kubelet[1515]: E1210 22:30:56.516447    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405856515995824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:30:56 addons-462156 kubelet[1515]: E1210 22:30:56.516820    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405856515995824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:06 addons-462156 kubelet[1515]: E1210 22:31:06.519657    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405866519273449 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:06 addons-462156 kubelet[1515]: E1210 22:31:06.519690    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405866519273449 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:16 addons-462156 kubelet[1515]: I1210 22:31:16.297097    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:31:16 addons-462156 kubelet[1515]: E1210 22:31:16.521960    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405876521593259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:16 addons-462156 kubelet[1515]: E1210 22:31:16.522117    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405876521593259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:26 addons-462156 kubelet[1515]: E1210 22:31:26.524941    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405886524499644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:26 addons-462156 kubelet[1515]: E1210 22:31:26.524963    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405886524499644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:34 addons-462156 kubelet[1515]: I1210 22:31:34.297339    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t84vv" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 22:31:36 addons-462156 kubelet[1515]: E1210 22:31:36.527396    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405896527091285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:36 addons-462156 kubelet[1515]: E1210 22:31:36.527416    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405896527091285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 10 22:31:44 addons-462156 kubelet[1515]: I1210 22:31:44.766183    1515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2dc\" (UniqueName: \"kubernetes.io/projected/326288b8-fceb-4c4a-8017-d11281559671-kube-api-access-rz2dc\") pod \"hello-world-app-5d498dc89-p9mq7\" (UID: \"326288b8-fceb-4c4a-8017-d11281559671\") " pod="default/hello-world-app-5d498dc89-p9mq7"
	
	
	==> storage-provisioner [ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b] <==
	W1210 22:31:20.600620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:22.604318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:22.609534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:24.612880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:24.620459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:26.624649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:26.629528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:28.632795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:28.637889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:30.641953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:30.647330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:32.651439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:32.656503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:34.661308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:34.666357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:36.669922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:36.683980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:38.687520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:38.693261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:40.696884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:40.705527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:42.708694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:42.713896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:44.723211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:31:44.752047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-462156 -n addons-462156
helpers_test.go:270: (dbg) Run:  kubectl --context addons-462156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc: exit status 1 (72.924746ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-p9mq7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-462156/192.168.39.89
	Start Time:       Wed, 10 Dec 2025 22:31:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rz2dc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rz2dc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p9mq7 to addons-462156
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w5dlb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-w8rwc" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable ingress-dns --alsologtostderr -v=1: (1.228029785s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable ingress --alsologtostderr -v=1: (7.751975027s)
--- FAIL: TestAddons/parallel/Ingress (158.39s)
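A minimal debugging sketch for a failure like this, assuming the addons-462156 profile is still running and that the ingress addon's controller Deployment keeps its default name (ingress-nginx-controller, which is not shown in the logs above and is an assumption here), is to inspect the controller and the default/nginx Service directly before the addon is disabled:

	# context/profile name and the default/nginx Service are taken from this report;
	# the Deployment name below is an assumed addon default, adjust if it differs
	kubectl --context addons-462156 -n ingress-nginx get pods -o wide
	kubectl --context addons-462156 -n ingress-nginx logs deployment/ingress-nginx-controller --tail=100
	kubectl --context addons-462156 get ingress -A -o wide
	kubectl --context addons-462156 get svc nginx -o wide

If the controller never reports the Ingress as admitted, its logs usually say why; if it does, the next thing to check is connectivity from inside the node to the controller's service.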

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497660 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497660 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497660 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-497660 --alsologtostderr -v=1] stderr:
I1210 22:41:52.446863   18216 out.go:360] Setting OutFile to fd 1 ...
I1210 22:41:52.447105   18216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:41:52.447113   18216 out.go:374] Setting ErrFile to fd 2...
I1210 22:41:52.447117   18216 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:41:52.447311   18216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:41:52.447578   18216 mustload.go:66] Loading cluster: functional-497660
I1210 22:41:52.447951   18216 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:41:52.449889   18216 host.go:66] Checking if "functional-497660" exists ...
I1210 22:41:52.450132   18216 api_server.go:166] Checking apiserver status ...
I1210 22:41:52.450179   18216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 22:41:52.452793   18216 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:41:52.453240   18216 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:41:52.453279   18216 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:41:52.453455   18216 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:41:52.552718   18216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6423/cgroup
W1210 22:41:52.565255   18216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6423/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1210 22:41:52.565328   18216 ssh_runner.go:195] Run: ls
I1210 22:41:52.570170   18216 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8441/healthz ...
I1210 22:41:52.575697   18216 api_server.go:279] https://192.168.39.7:8441/healthz returned 200:
ok
W1210 22:41:52.575745   18216 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1210 22:41:52.575896   18216 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:41:52.575911   18216 addons.go:70] Setting dashboard=true in profile "functional-497660"
I1210 22:41:52.575917   18216 addons.go:239] Setting addon dashboard=true in "functional-497660"
I1210 22:41:52.575939   18216 host.go:66] Checking if "functional-497660" exists ...
I1210 22:41:52.579483   18216 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1210 22:41:52.582945   18216 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1210 22:41:52.584022   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1210 22:41:52.584036   18216 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1210 22:41:52.586229   18216 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:41:52.586586   18216 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:41:52.586608   18216 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:41:52.586731   18216 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:41:52.688563   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1210 22:41:52.688588   18216 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1210 22:41:52.710682   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1210 22:41:52.710706   18216 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1210 22:41:52.731371   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1210 22:41:52.731391   18216 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1210 22:41:52.755360   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1210 22:41:52.755380   18216 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1210 22:41:52.777830   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1210 22:41:52.777861   18216 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1210 22:41:52.798372   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1210 22:41:52.798395   18216 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1210 22:41:52.823262   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1210 22:41:52.823285   18216 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1210 22:41:52.844467   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1210 22:41:52.844488   18216 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1210 22:41:52.866984   18216 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1210 22:41:52.867006   18216 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1210 22:41:52.888410   18216 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 22:41:53.587761   18216 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-497660 addons enable metrics-server

                                                
                                                
I1210 22:41:53.589279   18216 addons.go:202] Writing out "functional-497660" config to set dashboard=true...
W1210 22:41:53.589557   18216 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1210 22:41:53.590137   18216 kapi.go:59] client config for functional-497660: &rest.Config{Host:"https://192.168.39.7:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1210 22:41:53.590554   18216 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1210 22:41:53.590574   18216 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1210 22:41:53.590582   18216 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1210 22:41:53.590588   18216 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1210 22:41:53.590593   18216 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1210 22:41:53.599469   18216 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  b5e2b00e-9647-490f-8100-4c019bd7a9f2 833 0 2025-12-10 22:41:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-10 22:41:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.203.29,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.203.29],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1210 22:41:53.599596   18216 out.go:285] * Launching proxy ...
* Launching proxy ...
I1210 22:41:53.599656   18216 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-497660 proxy --port 36195]
I1210 22:41:53.600032   18216 dashboard.go:159] Waiting for kubectl to output host:port ...
I1210 22:41:53.643793   18216 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1210 22:41:53.643827   18216 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1210 22:41:53.653302   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ae27996-973b-46d0-9008-693b23e2c2f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00090b940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000364f00 TLS:<nil>}
I1210 22:41:53.653387   18216 retry.go:31] will retry after 74.683µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.657513   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd5e44e1-e2b8-44f7-8de4-cc4adefa2a41] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00049d880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365040 TLS:<nil>}
I1210 22:41:53.657573   18216 retry.go:31] will retry after 112.19µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.661196   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9945dd18-7f10-4d0c-9a19-60c6cc1f1819] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00090ba40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208a00 TLS:<nil>}
I1210 22:41:53.661246   18216 retry.go:31] will retry after 148.831µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.664955   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fa82c27-27b8-4e9f-b505-c15c05799f4e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00090bcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365180 TLS:<nil>}
I1210 22:41:53.664999   18216 retry.go:31] will retry after 387.104µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.668537   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be996684-091a-4541-953e-aeda87699c65] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00049d9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003652c0 TLS:<nil>}
I1210 22:41:53.668594   18216 retry.go:31] will retry after 326.828µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.672401   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3de6f6dd-cc70-4b9e-a891-97a7c211583c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc0016905c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1210 22:41:53.672466   18216 retry.go:31] will retry after 594.737µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.676537   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e1a7b58-d1b5-4983-a888-82482f0f0d41] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00049dac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000328a00 TLS:<nil>}
I1210 22:41:53.676587   18216 retry.go:31] will retry after 754.903µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.683406   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d09670e-7ed0-4b9f-9d12-59813a1f86e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00090be40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1210 22:41:53.683495   18216 retry.go:31] will retry after 880.058µs: Temporary Error: unexpected response code: 503
I1210 22:41:53.689084   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[613773a2-6b56-4e25-8278-9dea3c908645] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc0016906c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365540 TLS:<nil>}
I1210 22:41:53.689139   18216 retry.go:31] will retry after 2.475531ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.696788   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e50f807c-24ca-4733-93b6-30209bfe7d2f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc001690780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000328b40 TLS:<nil>}
I1210 22:41:53.696864   18216 retry.go:31] will retry after 5.544916ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.705933   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fc80a89-496c-4d6f-8375-9dd07782e316] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00049dbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000328c80 TLS:<nil>}
I1210 22:41:53.705998   18216 retry.go:31] will retry after 6.054581ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.715776   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a04fe476-c463-4ebe-9714-60bf9c3f077a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc001690880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209040 TLS:<nil>}
I1210 22:41:53.715841   18216 retry.go:31] will retry after 5.701094ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.725804   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf2950ca-2d05-46fc-9ee9-c6939aa6a129] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00170a080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000328dc0 TLS:<nil>}
I1210 22:41:53.725899   18216 retry.go:31] will retry after 14.533581ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.744645   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d9adbdaf-af24-4301-87a0-3a30ab553ac6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc001690980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365680 TLS:<nil>}
I1210 22:41:53.744698   18216 retry.go:31] will retry after 18.567254ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.767275   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a65c0695-1740-40da-884b-5071815b4673] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00170a140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000329040 TLS:<nil>}
I1210 22:41:53.767340   18216 retry.go:31] will retry after 32.280971ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.807388   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b92a029-7fae-420d-9495-e38ba867e72f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc001690a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003657c0 TLS:<nil>}
I1210 22:41:53.807466   18216 retry.go:31] will retry after 26.887217ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.844985   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48bd6478-29d5-47d2-926b-39b81d18086d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc00049dd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000329180 TLS:<nil>}
I1210 22:41:53.845039   18216 retry.go:31] will retry after 85.907883ms: Temporary Error: unexpected response code: 503
I1210 22:41:53.935228   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f966b63-a402-4b1c-b884-209251b9bbc2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:53 GMT]] Body:0xc001690b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1210 22:41:53.935322   18216 retry.go:31] will retry after 107.20843ms: Temporary Error: unexpected response code: 503
I1210 22:41:54.045993   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba0d6fe2-b312-4e4f-9cc5-64b3e1e0ad10] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:54 GMT]] Body:0xc00049de40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003292c0 TLS:<nil>}
I1210 22:41:54.046289   18216 retry.go:31] will retry after 90.211766ms: Temporary Error: unexpected response code: 503
I1210 22:41:54.140918   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71b19013-78ac-4bf6-8fa3-b124ccdec9e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:54 GMT]] Body:0xc001690c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1210 22:41:54.140980   18216 retry.go:31] will retry after 163.079713ms: Temporary Error: unexpected response code: 503
I1210 22:41:54.307513   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[305f8d37-5c2b-4c58-8b86-7579da128ff8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:54 GMT]] Body:0xc00049df40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000329400 TLS:<nil>}
I1210 22:41:54.307565   18216 retry.go:31] will retry after 254.978074ms: Temporary Error: unexpected response code: 503
I1210 22:41:54.566240   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8d82484-2a41-4782-ae38-286e87674b78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:54 GMT]] Body:0xc00170a240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1210 22:41:54.566315   18216 retry.go:31] will retry after 561.681659ms: Temporary Error: unexpected response code: 503
I1210 22:41:55.132065   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8e63c47-cebc-4299-a358-443931d9a426] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:55 GMT]] Body:0xc001690d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365cc0 TLS:<nil>}
I1210 22:41:55.132135   18216 retry.go:31] will retry after 759.946764ms: Temporary Error: unexpected response code: 503
I1210 22:41:55.896152   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0c2a2565-c2a3-4dcc-8149-f4d2291e349f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:55 GMT]] Body:0xc0018040c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000329540 TLS:<nil>}
I1210 22:41:55.896215   18216 retry.go:31] will retry after 1.233403992s: Temporary Error: unexpected response code: 503
I1210 22:41:57.134180   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b292bb12-6081-4ada-9d91-53966e58881b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:57 GMT]] Body:0xc000558a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1210 22:41:57.134246   18216 retry.go:31] will retry after 2.009523649s: Temporary Error: unexpected response code: 503
I1210 22:41:59.148411   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8b9bbb2-6704-4745-aa8d-b6d628334885] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:41:59 GMT]] Body:0xc000558b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000434000 TLS:<nil>}
I1210 22:41:59.148490   18216 retry.go:31] will retry after 3.748017196s: Temporary Error: unexpected response code: 503
I1210 22:42:02.902260   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f7b00d81-930a-4b3e-90c0-12b250a978b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:02 GMT]] Body:0xc00170a380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1210 22:42:02.902357   18216 retry.go:31] will retry after 4.757744689s: Temporary Error: unexpected response code: 503
I1210 22:42:07.664327   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5cd4c30f-141a-475d-92f7-8e30031f7f8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:07 GMT]] Body:0xc001804240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365e00 TLS:<nil>}
I1210 22:42:07.664410   18216 retry.go:31] will retry after 3.318787156s: Temporary Error: unexpected response code: 503
I1210 22:42:10.990372   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de15f1a3-f03a-4dc6-8377-8a468853dc9f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:10 GMT]] Body:0xc000558b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1210 22:42:10.990452   18216 retry.go:31] will retry after 4.849395157s: Temporary Error: unexpected response code: 503
I1210 22:42:15.850160   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e227762-cbe7-4b99-99cd-00d7587585ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:15 GMT]] Body:0xc00170a480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1210 22:42:15.850217   18216 retry.go:31] will retry after 8.926397818s: Temporary Error: unexpected response code: 503
I1210 22:42:25.314995   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02412246-a9a9-4da8-95a4-01939a61e473] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:25 GMT]] Body:0xc0018043c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1680 TLS:<nil>}
I1210 22:42:25.315052   18216 retry.go:31] will retry after 27.412317833s: Temporary Error: unexpected response code: 503
I1210 22:42:52.731715   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f58f2999-843c-4f2b-8f70-9ce0207dd7a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:42:52 GMT]] Body:0xc00170a580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209b80 TLS:<nil>}
I1210 22:42:52.731785   18216 retry.go:31] will retry after 29.128805769s: Temporary Error: unexpected response code: 503
I1210 22:43:21.868507   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba88a7dc-a7af-4594-87a1-549b06956337] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:43:21 GMT]] Body:0xc00170a6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c17c0 TLS:<nil>}
I1210 22:43:21.868570   18216 retry.go:31] will retry after 41.79974164s: Temporary Error: unexpected response code: 503
I1210 22:44:03.673042   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3e42652f-b0fa-4f0d-a90d-56c365ebb2d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:44:03 GMT]] Body:0xc001804080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1900 TLS:<nil>}
I1210 22:44:03.673118   18216 retry.go:31] will retry after 1m28.856200577s: Temporary Error: unexpected response code: 503
I1210 22:45:32.533903   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f41a7e82-99a5-45e0-ba15-2e39a6af1baf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:45:32 GMT]] Body:0xc00170a100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002083c0 TLS:<nil>}
I1210 22:45:32.533984   18216 retry.go:31] will retry after 37.495469578s: Temporary Error: unexpected response code: 503
I1210 22:46:10.032963   18216 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c91333d-9b60-4ddf-8d84-d172e8dccc6b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 22:46:10 GMT]] Body:0xc001804080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1a40 TLS:<nil>}
I1210 22:46:10.033022   18216 retry.go:31] will retry after 1m13.249642385s: Temporary Error: unexpected response code: 503
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-497660 -n functional-497660
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 logs -n 25: (1.335453305s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-497660 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh sudo umount -f /mount-9p                                                                                                      │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ mount          │ -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2446531598/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ ssh            │ functional-497660 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ update-context │ functional-497660 update-context --alsologtostderr -v=2                                                                                             │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ update-context │ functional-497660 update-context --alsologtostderr -v=2                                                                                             │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ update-context │ functional-497660 update-context --alsologtostderr -v=2                                                                                             │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ image          │ functional-497660 image ls --format short --alsologtostderr                                                                                         │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ image          │ functional-497660 image ls --format yaml --alsologtostderr                                                                                          │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh pgrep buildkitd                                                                                                               │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ ssh            │ functional-497660 ssh -- ls -la /mount-9p                                                                                                           │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ image          │ functional-497660 image build -t localhost/my-image:functional-497660 testdata/build --alsologtostderr                                              │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh sudo umount -f /mount-9p                                                                                                      │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ mount          │ -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount1 --alsologtostderr -v=1                 │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ ssh            │ functional-497660 ssh findmnt -T /mount1                                                                                                            │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ mount          │ -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount3 --alsologtostderr -v=1                 │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ mount          │ -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount2 --alsologtostderr -v=1                 │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ ssh            │ functional-497660 ssh findmnt -T /mount1                                                                                                            │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh findmnt -T /mount2                                                                                                            │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ ssh            │ functional-497660 ssh findmnt -T /mount3                                                                                                            │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ mount          │ -p functional-497660 --kill=true                                                                                                                    │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │                     │
	│ image          │ functional-497660 image ls --format json --alsologtostderr                                                                                          │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ image          │ functional-497660 image ls --format table --alsologtostderr                                                                                         │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	│ image          │ functional-497660 image ls                                                                                                                          │ functional-497660 │ jenkins │ v1.37.0 │ 10 Dec 25 22:42 UTC │ 10 Dec 25 22:42 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:42:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:42:16.540600   18365 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:42:16.540851   18365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:42:16.540862   18365 out.go:374] Setting ErrFile to fd 2...
	I1210 22:42:16.540867   18365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:42:16.541092   18365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:42:16.541583   18365 out.go:368] Setting JSON to false
	I1210 22:42:16.542554   18365 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1478,"bootTime":1765405059,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:42:16.542607   18365 start.go:143] virtualization: kvm guest
	I1210 22:42:16.544953   18365 out.go:179] * [functional-497660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:42:16.546086   18365 notify.go:221] Checking for updates...
	I1210 22:42:16.546100   18365 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:42:16.547673   18365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:42:16.549084   18365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:42:16.550397   18365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:42:16.551582   18365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:42:16.552643   18365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:42:16.554370   18365 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 22:42:16.555022   18365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:42:16.587036   18365 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 22:42:16.588165   18365 start.go:309] selected driver: kvm2
	I1210 22:42:16.588182   18365 start.go:927] validating driver "kvm2" against &{Name:functional-497660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-497660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sch
eduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:42:16.588317   18365 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:42:16.589274   18365 cni.go:84] Creating CNI manager for ""
	I1210 22:42:16.589342   18365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:42:16.589400   18365 start.go:353] cluster config:
	{Name:functional-497660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-4
97660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:42:16.590689   18365 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.134928290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765406813134906722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:263272,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc6decd3-f5d5-437d-b27c-7b916abbc4d6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.136256212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bb0e8ac-4697-428f-8b79-635ad4c4a613 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.136352659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bb0e8ac-4697-428f-8b79-635ad4c4a613 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.137933098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ef6fe52bb1e13ed725e35c35c597d4a20e74391e71d4b5bfc4155f3b0591215,PodSandboxId:a757a0edfda00dedc96c26a0e4e21b2c55e3a97fbf245201f8ac9dbfd8549245,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765406537440966635,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-b9lzz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 371417b7-73ac-4356-be47-7a4351adb918,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c0dd3c65d44cb4c2dd60a6a98fe18cb804be77ea7ac7801fdc9ae725f0e7fe,PodSandboxId:cd00e58e1b09a2370f7d5c6144c23804cf9cf0d33f16cc69d46fb498f8d9607b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765406533601657704,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid
: 4d50e361-2d6d-4adc-87c5-9e5bb49dc05f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bedfdeec08862e902288f7f6777156f97bb6498659cdee0c3d48c3685502380,PodSandboxId:2502bb54f034601ab4bd5ddb931e87c7921e1f4ab7a9a928690b90a25dfbda82,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765406530497018513,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3651e331a1020c43394c0e8efd14697dfff1c5332d8c7b5b4e4729e185c54f,PodSandboxId:d0d7881454c480d1573f0c3bc7cd175d23e49b2440507f0eebbe7aa790542ff0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765406530351865709,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-8bc26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f796a587-8be2-4454-b9d4-117d209d6c8e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffea6180547c0fb7f39fd03567de4b9c8f9148c25a414e018a404890368d041,PodSandboxId:2193a019676b29d4fb3690fbea93553be33f9de9b08721633fad0b037e3e8f00,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406502575497383,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b7
9-plpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a84aae63de3aa6f357ceed2b1e319599d70306bdcb4de401cb2d1b5fcb06db,PodSandboxId:7f698858614d1b9377e6a1fbc06575d68973223a6c414c79cd3362747a237a23,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406501671183957,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-n
ode-connect-9f67c86d4-x9n2b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76e6c958-6c1b-46f7-9691-cae03089c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2cb861baf241d5e5c89d2c74ad8aedfba652f029960a3dced673b36f97a1f9,PodSandboxId:908ca8dda937718a296fed3351a48ccdc06d8684b5a3e6396ae2ca2ea7c8a1cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765406475775639936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e45254926301e8b4d9471fba3c1890485a1c6edf0d586b2a01416f02167bd3c,PodSandboxId:fdc30e4bab267c4f9612d9407e086a48fe18f8007172acf720d909934423c539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765406475433178786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbf9761fcefaf14aade2d64f8f7da09a4f49842e20164598fb179c8618d0b07,PodSandboxId:5e4cf73b8486e1bbc0a5eb4532785cef545c2ef693ab22914d7cad3b9f648d40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69
e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765406475402961181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3099f65fc6bc4bb91fa578984b4b45981abc192baf334b35fe0f5ed88763cfd3,PodSandboxId:c525625a23ba79ebccdcc863fc975af74c40e47e1afa9e4f1ce7c0b945358b6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765406471547187137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820b258590812670328513cb9f8bc63c82759dad552b272a481bd9d9e9ac10ed,PodSandboxId:d4ad154c45805c04df36ccc5d05c0474232207180f45f2e8400973a54440d83f,Metadata:&ContainerMeta
data{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765406471513042941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90bbbe9736c30df1e90756d3f1c5c79d9e3f1db3b36276f7a813f73f
f569ff99,PodSandboxId:6276f85b3b765f0c0b123bfdd19e5e6d37a17c073b281ece1da9309ca0425692,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765406471467153367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:04b6e5bb548a3f34c705a7b87dd34b092efa466ead6cc2c441d4b0dca726efa9,PodSandboxId:559b3259c4c0f42fe5ad0a37a34f718bd71838eb41eae69e48d075127983ea17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765406471486129520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acc7cf3e9cb809ca70f2d07445a6529a,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d6052c01241c62e2fe069151e8f691b156be57ec93cdcbd320ad795637d9925,PodSandboxId:3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765406428401832865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25fef9fca1e8199164c4f7510156932a11d8a3da35eabf661c97c7b0d8f94635,PodSandboxId:518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765406428425322358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b75ee627a0448e401acf088bfb7b7b02f88484d3bd8dbddea731b2a690692,PodSandboxId:dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765406428385450802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f
158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec2dd2ad58f540358cd2eb0b893a7af790ef825e2b75a75b054265763c10a17,PodSandboxId:db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765406422338822459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bba76e48a4743ed402b870784343a793ebf4474ddba828e18c5bce1904de47a,PodSandboxId:4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765406422341749085,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40900607868b08279b704b4e030e3fcc640199b5a009742f55772920b0a7c92,PodSandboxId:e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765406420338616450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bb0e8ac-4697-428f-8b79-635ad4c4a613 name=/runtime.v1.RuntimeService/ListContainers
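The Version / ImageFsInfo / ListContainers triplets above are most likely the kubelet's routine CRI polling of CRI-O; the empty ContainerFilter explains the "No filters were applied, returning full container list" lines. For reference only (not part of the minikube test suite), a minimal Go sketch of the same RuntimeService/ListContainers call against CRI-O's default socket path, which is assumed here:

    // Illustrative sketch: list all containers via the CRI v1 API, as the
    // kubelet does in the log entries above. Socket path is an assumption
    // (CRI-O's default); this is not code from the test under report.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Dial CRI-O's runtime endpoint over a unix domain socket.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)

    	// An empty filter returns the full container list, matching the
    	// ListContainersRequest{Filter:&ContainerFilter{...}} entries above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		// Id, Metadata.Name and State are the same fields shown in the
    		// ListContainersResponse dumps in this log.
    		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
    	}
    }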
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.180821468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cea838d-ea5e-4571-926f-30c1277de630 name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.180969828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cea838d-ea5e-4571-926f-30c1277de630 name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.183021517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=505fb1ee-0579-4023-8ece-1b41ee5a9dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.183827207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765406813183803094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:263272,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=505fb1ee-0579-4023-8ece-1b41ee5a9dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.184833255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7f6af22-6af1-43be-9e44-cbe133e2b3b3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.184900408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7f6af22-6af1-43be-9e44-cbe133e2b3b3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.185292915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ef6fe52bb1e13ed725e35c35c597d4a20e74391e71d4b5bfc4155f3b0591215,PodSandboxId:a757a0edfda00dedc96c26a0e4e21b2c55e3a97fbf245201f8ac9dbfd8549245,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765406537440966635,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-b9lzz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 371417b7-73ac-4356-be47-7a4351adb918,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c0dd3c65d44cb4c2dd60a6a98fe18cb804be77ea7ac7801fdc9ae725f0e7fe,PodSandboxId:cd00e58e1b09a2370f7d5c6144c23804cf9cf0d33f16cc69d46fb498f8d9607b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765406533601657704,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid
: 4d50e361-2d6d-4adc-87c5-9e5bb49dc05f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bedfdeec08862e902288f7f6777156f97bb6498659cdee0c3d48c3685502380,PodSandboxId:2502bb54f034601ab4bd5ddb931e87c7921e1f4ab7a9a928690b90a25dfbda82,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765406530497018513,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3651e331a1020c43394c0e8efd14697dfff1c5332d8c7b5b4e4729e185c54f,PodSandboxId:d0d7881454c480d1573f0c3bc7cd175d23e49b2440507f0eebbe7aa790542ff0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765406530351865709,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-8bc26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f796a587-8be2-4454-b9d4-117d209d6c8e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffea6180547c0fb7f39fd03567de4b9c8f9148c25a414e018a404890368d041,PodSandboxId:2193a019676b29d4fb3690fbea93553be33f9de9b08721633fad0b037e3e8f00,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406502575497383,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b7
9-plpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a84aae63de3aa6f357ceed2b1e319599d70306bdcb4de401cb2d1b5fcb06db,PodSandboxId:7f698858614d1b9377e6a1fbc06575d68973223a6c414c79cd3362747a237a23,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406501671183957,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-n
ode-connect-9f67c86d4-x9n2b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76e6c958-6c1b-46f7-9691-cae03089c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2cb861baf241d5e5c89d2c74ad8aedfba652f029960a3dced673b36f97a1f9,PodSandboxId:908ca8dda937718a296fed3351a48ccdc06d8684b5a3e6396ae2ca2ea7c8a1cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765406475775639936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e45254926301e8b4d9471fba3c1890485a1c6edf0d586b2a01416f02167bd3c,PodSandboxId:fdc30e4bab267c4f9612d9407e086a48fe18f8007172acf720d909934423c539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765406475433178786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbf9761fcefaf14aade2d64f8f7da09a4f49842e20164598fb179c8618d0b07,PodSandboxId:5e4cf73b8486e1bbc0a5eb4532785cef545c2ef693ab22914d7cad3b9f648d40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69
e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765406475402961181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3099f65fc6bc4bb91fa578984b4b45981abc192baf334b35fe0f5ed88763cfd3,PodSandboxId:c525625a23ba79ebccdcc863fc975af74c40e47e1afa9e4f1ce7c0b945358b6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765406471547187137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820b258590812670328513cb9f8bc63c82759dad552b272a481bd9d9e9ac10ed,PodSandboxId:d4ad154c45805c04df36ccc5d05c0474232207180f45f2e8400973a54440d83f,Metadata:&ContainerMeta
data{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765406471513042941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90bbbe9736c30df1e90756d3f1c5c79d9e3f1db3b36276f7a813f73f
f569ff99,PodSandboxId:6276f85b3b765f0c0b123bfdd19e5e6d37a17c073b281ece1da9309ca0425692,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765406471467153367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:04b6e5bb548a3f34c705a7b87dd34b092efa466ead6cc2c441d4b0dca726efa9,PodSandboxId:559b3259c4c0f42fe5ad0a37a34f718bd71838eb41eae69e48d075127983ea17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765406471486129520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acc7cf3e9cb809ca70f2d07445a6529a,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d6052c01241c62e2fe069151e8f691b156be57ec93cdcbd320ad795637d9925,PodSandboxId:3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765406428401832865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25fef9fca1e8199164c4f7510156932a11d8a3da35eabf661c97c7b0d8f94635,PodSandboxId:518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765406428425322358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b75ee627a0448e401acf088bfb7b7b02f88484d3bd8dbddea731b2a690692,PodSandboxId:dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765406428385450802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f
158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec2dd2ad58f540358cd2eb0b893a7af790ef825e2b75a75b054265763c10a17,PodSandboxId:db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765406422338822459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bba76e48a4743ed402b870784343a793ebf4474ddba828e18c5bce1904de47a,PodSandboxId:4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765406422341749085,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40900607868b08279b704b4e030e3fcc640199b5a009742f55772920b0a7c92,PodSandboxId:e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765406420338616450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7f6af22-6af1-43be-9e44-cbe133e2b3b3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.215116160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e730ad7e-c4ee-4bc1-802a-aee83e7ce66c name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.215238047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e730ad7e-c4ee-4bc1-802a-aee83e7ce66c name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.216505319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dddce327-5666-4201-86b0-e71eef85ae25 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.217220912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765406813217197528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:263272,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dddce327-5666-4201-86b0-e71eef85ae25 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.218184147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3764726-93c8-4900-95dd-3c23d560aa0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.218289888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3764726-93c8-4900-95dd-3c23d560aa0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.218738599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ef6fe52bb1e13ed725e35c35c597d4a20e74391e71d4b5bfc4155f3b0591215,PodSandboxId:a757a0edfda00dedc96c26a0e4e21b2c55e3a97fbf245201f8ac9dbfd8549245,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765406537440966635,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-b9lzz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 371417b7-73ac-4356-be47-7a4351adb918,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c0dd3c65d44cb4c2dd60a6a98fe18cb804be77ea7ac7801fdc9ae725f0e7fe,PodSandboxId:cd00e58e1b09a2370f7d5c6144c23804cf9cf0d33f16cc69d46fb498f8d9607b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765406533601657704,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid
: 4d50e361-2d6d-4adc-87c5-9e5bb49dc05f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bedfdeec08862e902288f7f6777156f97bb6498659cdee0c3d48c3685502380,PodSandboxId:2502bb54f034601ab4bd5ddb931e87c7921e1f4ab7a9a928690b90a25dfbda82,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765406530497018513,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3651e331a1020c43394c0e8efd14697dfff1c5332d8c7b5b4e4729e185c54f,PodSandboxId:d0d7881454c480d1573f0c3bc7cd175d23e49b2440507f0eebbe7aa790542ff0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765406530351865709,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-8bc26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f796a587-8be2-4454-b9d4-117d209d6c8e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffea6180547c0fb7f39fd03567de4b9c8f9148c25a414e018a404890368d041,PodSandboxId:2193a019676b29d4fb3690fbea93553be33f9de9b08721633fad0b037e3e8f00,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406502575497383,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b7
9-plpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a84aae63de3aa6f357ceed2b1e319599d70306bdcb4de401cb2d1b5fcb06db,PodSandboxId:7f698858614d1b9377e6a1fbc06575d68973223a6c414c79cd3362747a237a23,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406501671183957,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-n
ode-connect-9f67c86d4-x9n2b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76e6c958-6c1b-46f7-9691-cae03089c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2cb861baf241d5e5c89d2c74ad8aedfba652f029960a3dced673b36f97a1f9,PodSandboxId:908ca8dda937718a296fed3351a48ccdc06d8684b5a3e6396ae2ca2ea7c8a1cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765406475775639936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e45254926301e8b4d9471fba3c1890485a1c6edf0d586b2a01416f02167bd3c,PodSandboxId:fdc30e4bab267c4f9612d9407e086a48fe18f8007172acf720d909934423c539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765406475433178786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbf9761fcefaf14aade2d64f8f7da09a4f49842e20164598fb179c8618d0b07,PodSandboxId:5e4cf73b8486e1bbc0a5eb4532785cef545c2ef693ab22914d7cad3b9f648d40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69
e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765406475402961181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3099f65fc6bc4bb91fa578984b4b45981abc192baf334b35fe0f5ed88763cfd3,PodSandboxId:c525625a23ba79ebccdcc863fc975af74c40e47e1afa9e4f1ce7c0b945358b6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765406471547187137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820b258590812670328513cb9f8bc63c82759dad552b272a481bd9d9e9ac10ed,PodSandboxId:d4ad154c45805c04df36ccc5d05c0474232207180f45f2e8400973a54440d83f,Metadata:&ContainerMeta
data{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765406471513042941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90bbbe9736c30df1e90756d3f1c5c79d9e3f1db3b36276f7a813f73f
f569ff99,PodSandboxId:6276f85b3b765f0c0b123bfdd19e5e6d37a17c073b281ece1da9309ca0425692,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765406471467153367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:04b6e5bb548a3f34c705a7b87dd34b092efa466ead6cc2c441d4b0dca726efa9,PodSandboxId:559b3259c4c0f42fe5ad0a37a34f718bd71838eb41eae69e48d075127983ea17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765406471486129520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acc7cf3e9cb809ca70f2d07445a6529a,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d6052c01241c62e2fe069151e8f691b156be57ec93cdcbd320ad795637d9925,PodSandboxId:3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765406428401832865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25fef9fca1e8199164c4f7510156932a11d8a3da35eabf661c97c7b0d8f94635,PodSandboxId:518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765406428425322358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b75ee627a0448e401acf088bfb7b7b02f88484d3bd8dbddea731b2a690692,PodSandboxId:dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765406428385450802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f
158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec2dd2ad58f540358cd2eb0b893a7af790ef825e2b75a75b054265763c10a17,PodSandboxId:db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765406422338822459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bba76e48a4743ed402b870784343a793ebf4474ddba828e18c5bce1904de47a,PodSandboxId:4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765406422341749085,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40900607868b08279b704b4e030e3fcc640199b5a009742f55772920b0a7c92,PodSandboxId:e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765406420338616450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3764726-93c8-4900-95dd-3c23d560aa0a name=/runtime.v1.RuntimeService/ListContainers
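The ImageFsInfoResponse entries in this log report usage of CRI-O's image store (UsedBytes and InodesUsed under /var/lib/containers/storage/overlay-images). A short continuation of the sketch above, reusing its assumed gRPC connection, shows the corresponding ImageService/ImageFsInfo call; again this is illustrative, not part of the test suite:

    // Continuation of the earlier sketch; "conn" and "ctx" are assumed from it.
    imageClient := runtimeapi.NewImageServiceClient(conn)
    fsInfo, err := imageClient.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
    if err != nil {
    	panic(err)
    }
    for _, fs := range fsInfo.ImageFilesystems {
    	// Mirrors the FilesystemUsage fields seen in the log entries:
    	// mountpoint, used bytes, and inodes used.
    	fmt.Printf("%s  used=%dB  inodes=%d\n",
    		fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
    }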
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.249687620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58a68b12-644b-479b-afae-becd407d5b7a name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.249880680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58a68b12-644b-479b-afae-becd407d5b7a name=/runtime.v1.RuntimeService/Version
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.251789827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fb47ca0-324f-460e-adb5-618e83ab029c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.252772208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765406813252741247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:263272,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fb47ca0-324f-460e-adb5-618e83ab029c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.253952365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1e7aa8c-9478-44d8-aecf-68af124b9cf4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.254103023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1e7aa8c-9478-44d8-aecf-68af124b9cf4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 22:46:53 functional-497660 crio[5818]: time="2025-12-10 22:46:53.254548070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ef6fe52bb1e13ed725e35c35c597d4a20e74391e71d4b5bfc4155f3b0591215,PodSandboxId:a757a0edfda00dedc96c26a0e4e21b2c55e3a97fbf245201f8ac9dbfd8549245,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765406537440966635,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-b9lzz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 371417b7-73ac-4356-be47-7a4351adb918,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c0dd3c65d44cb4c2dd60a6a98fe18cb804be77ea7ac7801fdc9ae725f0e7fe,PodSandboxId:cd00e58e1b09a2370f7d5c6144c23804cf9cf0d33f16cc69d46fb498f8d9607b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765406533601657704,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid
: 4d50e361-2d6d-4adc-87c5-9e5bb49dc05f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bedfdeec08862e902288f7f6777156f97bb6498659cdee0c3d48c3685502380,PodSandboxId:2502bb54f034601ab4bd5ddb931e87c7921e1f4ab7a9a928690b90a25dfbda82,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765406530497018513,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3651e331a1020c43394c0e8efd14697dfff1c5332d8c7b5b4e4729e185c54f,PodSandboxId:d0d7881454c480d1573f0c3bc7cd175d23e49b2440507f0eebbe7aa790542ff0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765406530351865709,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-8bc26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f796a587-8be2-4454-b9d4-117d209d6c8e,},Anno
tations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffea6180547c0fb7f39fd03567de4b9c8f9148c25a414e018a404890368d041,PodSandboxId:2193a019676b29d4fb3690fbea93553be33f9de9b08721633fad0b037e3e8f00,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406502575497383,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b7
9-plpbx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a84aae63de3aa6f357ceed2b1e319599d70306bdcb4de401cb2d1b5fcb06db,PodSandboxId:7f698858614d1b9377e6a1fbc06575d68973223a6c414c79cd3362747a237a23,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765406501671183957,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-n
ode-connect-9f67c86d4-x9n2b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 76e6c958-6c1b-46f7-9691-cae03089c3c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2cb861baf241d5e5c89d2c74ad8aedfba652f029960a3dced673b36f97a1f9,PodSandboxId:908ca8dda937718a296fed3351a48ccdc06d8684b5a3e6396ae2ca2ea7c8a1cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765406475775639936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e45254926301e8b4d9471fba3c1890485a1c6edf0d586b2a01416f02167bd3c,PodSandboxId:fdc30e4bab267c4f9612d9407e086a48fe18f8007172acf720d909934423c539,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765406475433178786,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbf9761fcefaf14aade2d64f8f7da09a4f49842e20164598fb179c8618d0b07,PodSandboxId:5e4cf73b8486e1bbc0a5eb4532785cef545c2ef693ab22914d7cad3b9f648d40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69
e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765406475402961181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3099f65fc6bc4bb91fa578984b4b45981abc192baf334b35fe0f5ed88763cfd3,PodSandboxId:c525625a23ba79ebccdcc863fc975af74c40e47e1afa9e4f1ce7c0b945358b6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765406471547187137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820b258590812670328513cb9f8bc63c82759dad552b272a481bd9d9e9ac10ed,PodSandboxId:d4ad154c45805c04df36ccc5d05c0474232207180f45f2e8400973a54440d83f,Metadata:&ContainerMeta
data{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765406471513042941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90bbbe9736c30df1e90756d3f1c5c79d9e3f1db3b36276f7a813f73f
f569ff99,PodSandboxId:6276f85b3b765f0c0b123bfdd19e5e6d37a17c073b281ece1da9309ca0425692,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765406471467153367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:04b6e5bb548a3f34c705a7b87dd34b092efa466ead6cc2c441d4b0dca726efa9,PodSandboxId:559b3259c4c0f42fe5ad0a37a34f718bd71838eb41eae69e48d075127983ea17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765406471486129520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acc7cf3e9cb809ca70f2d07445a6529a,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d6052c01241c62e2fe069151e8f691b156be57ec93cdcbd320ad795637d9925,PodSandboxId:3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765406428401832865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a02c9ca4916395fdf029605dc73da32a,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25fef9fca1e8199164c4f7510156932a11d8a3da35eabf661c97c7b0d8f94635,PodSandboxId:518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765406428425322358,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1492eee028679664d5b8ec63b14d9ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b75ee627a0448e401acf088bfb7b7b02f88484d3bd8dbddea731b2a690692,PodSandboxId:dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765406428385450802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-497660,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f
158481c41d7c7ea1ac420460ca09ac,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec2dd2ad58f540358cd2eb0b893a7af790ef825e2b75a75b054265763c10a17,PodSandboxId:db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765406422338822459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-8m5bc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359b24c2-173b-4e8d-a9ad-37699e9c182c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bba76e48a4743ed402b870784343a793ebf4474ddba828e18c5bce1904de47a,PodSandboxId:4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765406422341749085,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ztjdq,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: a90ed420-f6d0-41f8-94c4-4becc272220c,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40900607868b08279b704b4e030e3fcc640199b5a009742f55772920b0a7c92,PodSandboxId:e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765406420338616450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69df49db-a6bb-4224-a082-ef172c852dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1e7aa8c-9478-44d8-aecf-68af124b9cf4 name=/runtime.v1.RuntimeService/ListContainers
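	The crio entries above are the runtime's own debug log; each Request/Response pair is one CRI call (Version, ImageFsInfo, ListContainers). To follow the same stream live, a minimal sketch, assuming the guest runs cri-o as the systemd unit crio (which the crio[5818] journald prefix suggests) and using this report's profile name:

	  # follow the cri-o daemon log inside the minikube guest
	  minikube -p functional-497660 ssh -- sudo journalctl -u crio -f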
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5ef6fe52bb1e1       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   4 minutes ago       Running             dashboard-metrics-scraper   0                   a757a0edfda00       dashboard-metrics-scraper-5565989548-b9lzz   kubernetes-dashboard
	e2c0dd3c65d44       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              4 minutes ago       Exited              mount-munger                0                   cd00e58e1b09a       busybox-mount                                default
	2bedfdeec0886       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                                 4 minutes ago       Running             myfrontend                  0                   2502bb54f0346       sp-pod                                       default
	9d3651e331a10       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036      4 minutes ago       Running             mysql                       0                   d0d7881454c48       mysql-7d7b65bc95-8bc26                       default
	dffea6180547c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   2193a019676b2       hello-node-5758569b79-plpbx                  default
	20a84aae63de3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   7f698858614d1       hello-node-connect-9f67c86d4-x9n2b           default
	0c2cb861baf24       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 5 minutes ago       Running             coredns                     3                   908ca8dda9377       coredns-7d764666f9-ztjdq                     kube-system
	1e45254926301       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 5 minutes ago       Running             storage-provisioner         3                   fdc30e4bab267       storage-provisioner                          kube-system
	5dbf9761fcefa       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 5 minutes ago       Running             kube-proxy                  3                   5e4cf73b8486e       kube-proxy-8m5bc                             kube-system
	3099f65fc6bc4       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 5 minutes ago       Running             kube-controller-manager     3                   c525625a23ba7       kube-controller-manager-functional-497660    kube-system
	820b258590812       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 5 minutes ago       Running             kube-scheduler              3                   d4ad154c45805       kube-scheduler-functional-497660             kube-system
	04b6e5bb548a3       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 5 minutes ago       Running             kube-apiserver              0                   559b3259c4c0f       kube-apiserver-functional-497660             kube-system
	90bbbe9736c30       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 5 minutes ago       Running             etcd                        3                   6276f85b3b765       etcd-functional-497660                       kube-system
	25fef9fca1e81       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 6 minutes ago       Exited              etcd                        2                   518a69eb2653b       etcd-functional-497660                       kube-system
	7d6052c01241c       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 6 minutes ago       Exited              kube-scheduler              2                   3ac5e4b41d294       kube-scheduler-functional-497660             kube-system
	bf0b75ee627a0       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 6 minutes ago       Exited              kube-controller-manager     2                   dfd30d7e19f4a       kube-controller-manager-functional-497660    kube-system
	5bba76e48a474       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 6 minutes ago       Exited              coredns                     2                   4263736afb808       coredns-7d764666f9-ztjdq                     kube-system
	0ec2dd2ad58f5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 6 minutes ago       Exited              kube-proxy                  2                   db9c896533757       kube-proxy-8m5bc                             kube-system
	e40900607868b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Exited              storage-provisioner         2                   e9e5fca8f4a8a       storage-provisioner                          kube-system
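	The container status table is the CRI-level view of the node, including the Exited attempts left over from the earlier restarts. Roughly the same listing can be produced with crictl on the guest; a sketch, assuming crictl is on the guest's PATH as it normally is in minikube images:

	  # list all CRI containers, running and exited
	  minikube -p functional-497660 ssh -- sudo crictl ps -a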
	
	
	==> coredns [0c2cb861baf241d5e5c89d2c74ad8aedfba652f029960a3dced673b36f97a1f9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:40342 - 52772 "HINFO IN 1751246409838724863.3753942894579332421. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032543633s
	
	
	==> coredns [5bba76e48a4743ed402b870784343a793ebf4474ddba828e18c5bce1904de47a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47916 - 43098 "HINFO IN 264988328300255009.728859986800136921. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.025187827s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
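	Both CoreDNS attempts start cleanly (same configuration SHA, a self-check HINFO query, then a graceful SIGTERM on the older attempt). To sanity-check that the current instance resolves cluster names, a throwaway pod is enough; a sketch, assuming kubectl points at this profile's context and reusing the busybox image that already appears in this report:

	  # one-off in-cluster DNS lookup; the pod is deleted afterwards
	  kubectl run dns-check --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default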
	
	
	==> describe nodes <==
	Name:               functional-497660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-497660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=functional-497660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T22_39_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 22:39:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-497660
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 22:46:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 22:42:46 +0000   Wed, 10 Dec 2025 22:39:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 22:42:46 +0000   Wed, 10 Dec 2025 22:39:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 22:42:46 +0000   Wed, 10 Dec 2025 22:39:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 22:42:46 +0000   Wed, 10 Dec 2025 22:39:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    functional-497660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 013704e7fefb40009cb54950766f4547
	  System UUID:                013704e7-fefb-4000-9cb5-4950766f4547
	  Boot ID:                    f7781a36-f82e-4e8c-8f0d-4e9c49879fc4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-plpbx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  default                     hello-node-connect-9f67c86d4-x9n2b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     mysql-7d7b65bc95-8bc26                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 coredns-7d764666f9-ztjdq                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m39s
	  kube-system                 etcd-functional-497660                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m44s
	  kube-system                 kube-apiserver-functional-497660              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-functional-497660     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-proxy-8m5bc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-scheduler-functional-497660              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-b9lzz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zftg8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m40s  node-controller  Node functional-497660 event: Registered Node functional-497660 in Controller
	  Normal  RegisteredNode  6m45s  node-controller  Node functional-497660 event: Registered Node functional-497660 in Controller
	  Normal  RegisteredNode  6m19s  node-controller  Node functional-497660 event: Registered Node functional-497660 in Controller
	  Normal  RegisteredNode  5m36s  node-controller  Node functional-497660 event: Registered Node functional-497660 in Controller
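	As a quick consistency check, the Allocated resources percentages follow directly from the node Capacity above (2 CPUs, 4001788Ki memory):

	  cpu requests     1350m / 2000m                 ≈ 67%
	  cpu limits        700m / 2000m                 = 35%
	  memory requests   682Mi = 698368Ki / 4001788Ki ≈ 17%
	  memory limits     870Mi = 890880Ki / 4001788Ki ≈ 22%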
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084321] kauditd_printk_skb: 1 callbacks suppressed
	[Dec10 22:39] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135245] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.115294] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.186921] kauditd_printk_skb: 248 callbacks suppressed
	[ +30.086737] kauditd_printk_skb: 45 callbacks suppressed
	[Dec10 22:40] kauditd_printk_skb: 327 callbacks suppressed
	[  +8.619212] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.006224] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.128697] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.864043] kauditd_printk_skb: 84 callbacks suppressed
	[ +11.177045] kauditd_printk_skb: 2 callbacks suppressed
	[Dec10 22:41] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.944521] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.324649] kauditd_printk_skb: 161 callbacks suppressed
	[ +17.978960] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.000064] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 122 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 74 callbacks suppressed
	[Dec10 22:42] kauditd_printk_skb: 125 callbacks suppressed
	[  +0.000072] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.454782] kauditd_printk_skb: 77 callbacks suppressed
	[  +1.710221] crun[10270]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +3.929744] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [25fef9fca1e8199164c4f7510156932a11d8a3da35eabf661c97c7b0d8f94635] <==
	{"level":"warn","ts":"2025-12-10T22:40:30.456814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.479192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.492647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.500344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.506308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.515475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T22:40:30.562463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T22:40:54.415981Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T22:40:54.416146Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-497660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	{"level":"error","ts":"2025-12-10T22:40:54.417691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T22:40:54.492514Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T22:40:54.492603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T22:40:54.492623Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bb39151d8411994b","current-leader-member-id":"bb39151d8411994b"}
	{"level":"info","ts":"2025-12-10T22:40:54.492703Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T22:40:54.492712Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T22:40:54.492840Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T22:40:54.492935Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T22:40:54.492947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.7:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T22:40:54.493007Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T22:40:54.493017Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T22:40:54.493024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T22:40:54.496469Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"error","ts":"2025-12-10T22:40:54.496551Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.7:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T22:40:54.496589Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2025-12-10T22:40:54.496595Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-497660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	
	
	==> etcd [90bbbe9736c30df1e90756d3f1c5c79d9e3f1db3b36276f7a813f73ff569ff99] <==
	{"level":"info","ts":"2025-12-10T22:42:01.609879Z","caller":"traceutil/trace.go:172","msg":"trace[1422315578] transaction","detail":"{read_only:false; response_revision:851; number_of_response:1; }","duration":"210.622269ms","start":"2025-12-10T22:42:01.399214Z","end":"2025-12-10T22:42:01.609837Z","steps":["trace[1422315578] 'process raft request'  (duration: 210.390166ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:04.874577Z","caller":"traceutil/trace.go:172","msg":"trace[2071680594] transaction","detail":"{read_only:false; response_revision:854; number_of_response:1; }","duration":"245.191269ms","start":"2025-12-10T22:42:04.629372Z","end":"2025-12-10T22:42:04.874563Z","steps":["trace[2071680594] 'process raft request'  (duration: 245.079341ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:06.011895Z","caller":"traceutil/trace.go:172","msg":"trace[1057918841] linearizableReadLoop","detail":"{readStateIndex:946; appliedIndex:946; }","duration":"283.510458ms","start":"2025-12-10T22:42:05.728369Z","end":"2025-12-10T22:42:06.011879Z","steps":["trace[1057918841] 'read index received'  (duration: 283.501179ms)","trace[1057918841] 'applied index is now lower than readState.Index'  (duration: 5.084µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:42:06.012688Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"284.264426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-10T22:42:06.012790Z","caller":"traceutil/trace.go:172","msg":"trace[544937593] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:861; }","duration":"284.416359ms","start":"2025-12-10T22:42:05.728364Z","end":"2025-12-10T22:42:06.012780Z","steps":["trace[544937593] 'agreement among raft nodes before linearized reading'  (duration: 284.161092ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:06.012958Z","caller":"traceutil/trace.go:172","msg":"trace[1413757803] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"289.025488ms","start":"2025-12-10T22:42:05.723922Z","end":"2025-12-10T22:42:06.012948Z","steps":["trace[1413757803] 'process raft request'  (duration: 288.197764ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:42:06.013166Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.584988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:42:06.013480Z","caller":"traceutil/trace.go:172","msg":"trace[913642497] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:862; }","duration":"130.902515ms","start":"2025-12-10T22:42:05.882570Z","end":"2025-12-10T22:42:06.013472Z","steps":["trace[913642497] 'agreement among raft nodes before linearized reading'  (duration: 130.567586ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:42:06.014891Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.647717ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:42:06.015843Z","caller":"traceutil/trace.go:172","msg":"trace[1837952551] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:862; }","duration":"113.67628ms","start":"2025-12-10T22:42:05.901233Z","end":"2025-12-10T22:42:06.014909Z","steps":["trace[1837952551] 'agreement among raft nodes before linearized reading'  (duration: 113.637172ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:08.380417Z","caller":"traceutil/trace.go:172","msg":"trace[1358579136] linearizableReadLoop","detail":"{readStateIndex:948; appliedIndex:948; }","duration":"320.712084ms","start":"2025-12-10T22:42:08.059631Z","end":"2025-12-10T22:42:08.380343Z","steps":["trace[1358579136] 'read index received'  (duration: 320.706112ms)","trace[1358579136] 'applied index is now lower than readState.Index'  (duration: 4.926µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:42:08.380520Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"320.875972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:42:08.380507Z","caller":"traceutil/trace.go:172","msg":"trace[107869620] transaction","detail":"{read_only:false; response_revision:864; number_of_response:1; }","duration":"334.223712ms","start":"2025-12-10T22:42:08.046273Z","end":"2025-12-10T22:42:08.380497Z","steps":["trace[107869620] 'process raft request'  (duration: 334.089798ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:08.380539Z","caller":"traceutil/trace.go:172","msg":"trace[275020461] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:864; }","duration":"320.90721ms","start":"2025-12-10T22:42:08.059626Z","end":"2025-12-10T22:42:08.380534Z","steps":["trace[275020461] 'agreement among raft nodes before linearized reading'  (duration: 320.841974ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:42:08.380555Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T22:42:08.059611Z","time spent":"320.94116ms","remote":"127.0.0.1:57160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-12-10T22:42:08.380970Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T22:42:08.046259Z","time spent":"334.291086ms","remote":"127.0.0.1:57480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:863 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-10T22:42:25.274088Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11046092980136354152,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-10T22:42:25.305921Z","caller":"traceutil/trace.go:172","msg":"trace[608924237] linearizableReadLoop","detail":"{readStateIndex:989; appliedIndex:989; }","duration":"532.551684ms","start":"2025-12-10T22:42:24.773306Z","end":"2025-12-10T22:42:25.305858Z","steps":["trace[608924237] 'read index received'  (duration: 532.546401ms)","trace[608924237] 'applied index is now lower than readState.Index'  (duration: 4.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T22:42:25.306971Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"533.632349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:799"}
	{"level":"info","ts":"2025-12-10T22:42:25.307025Z","caller":"traceutil/trace.go:172","msg":"trace[1836516224] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:900; }","duration":"533.712008ms","start":"2025-12-10T22:42:24.773302Z","end":"2025-12-10T22:42:25.307014Z","steps":["trace[1836516224] 'agreement among raft nodes before linearized reading'  (duration: 532.702647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:42:25.307059Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T22:42:24.773289Z","time spent":"533.759799ms","remote":"127.0.0.1:57480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":822,"request content":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 "}
	{"level":"warn","ts":"2025-12-10T22:42:25.307223Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"406.181163ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T22:42:25.307257Z","caller":"traceutil/trace.go:172","msg":"trace[493809103] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:901; }","duration":"406.221215ms","start":"2025-12-10T22:42:24.901029Z","end":"2025-12-10T22:42:25.307250Z","steps":["trace[493809103] 'agreement among raft nodes before linearized reading'  (duration: 406.152135ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T22:42:25.307334Z","caller":"traceutil/trace.go:172","msg":"trace[1201117786] transaction","detail":"{read_only:false; response_revision:901; number_of_response:1; }","duration":"761.835603ms","start":"2025-12-10T22:42:24.545490Z","end":"2025-12-10T22:42:25.307326Z","steps":["trace[1201117786] 'process raft request'  (duration: 760.422333ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T22:42:25.307483Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T22:42:24.545459Z","time spent":"761.966287ms","remote":"127.0.0.1:57480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:900 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 22:46:53 up 8 min,  0 users,  load average: 0.37, 0.80, 0.47
	Linux functional-497660 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [04b6e5bb548a3f34c705a7b87dd34b092efa466ead6cc2c441d4b0dca726efa9] <==
	I1210 22:41:14.196948       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 22:41:14.197128       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 22:41:14.213337       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 22:41:14.789455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 22:41:14.901609       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 22:41:15.697084       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 22:41:15.781919       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 22:41:15.822791       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 22:41:15.839278       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 22:41:17.547336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 22:41:17.692998       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 22:41:34.003466       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.122.40"}
	I1210 22:41:38.802762       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 22:41:38.921762       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.117.225"}
	I1210 22:41:40.771996       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.98.120"}
	I1210 22:41:45.828698       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.19.115"}
	I1210 22:41:53.277592       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 22:41:53.550226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.203.29"}
	I1210 22:41:53.574201       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.7.122"}
	E1210 22:41:59.576752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:59868: use of closed network connection
	E1210 22:42:16.202634       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:55970: use of closed network connection
	E1210 22:42:16.356662       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:55982: use of closed network connection
	E1210 22:42:17.918498       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:56010: use of closed network connection
	E1210 22:42:19.179373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:56036: use of closed network connection
	E1210 22:42:22.320229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.7:8441->192.168.39.1:56064: use of closed network connection
	
	
	==> kube-controller-manager [3099f65fc6bc4bb91fa578984b4b45981abc192baf334b35fe0f5ed88763cfd3] <==
	I1210 22:41:17.300235       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.302354       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304696       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304703       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.306065       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 22:41:17.308692       1 range_allocator.go:177] "Sending events to api server"
	I1210 22:41:17.304686       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.310605       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 22:41:17.310623       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:41:17.310638       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304714       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304661       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.302382       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304674       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.304680       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.329092       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:41:17.332688       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.404908       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:17.404938       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 22:41:17.404944       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 22:41:17.429694       1 shared_informer.go:377] "Caches are synced"
	E1210 22:41:53.391251       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 22:41:53.411857       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 22:41:53.413456       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 22:41:53.427748       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bf0b75ee627a0448e401acf088bfb7b7b02f88484d3bd8dbddea731b2a690692] <==
	I1210 22:40:34.393088       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393094       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393101       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393107       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393144       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393151       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393873       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393156       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393168       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393162       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.398785       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.393477       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.401542       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.401580       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.401601       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.401634       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.401657       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.403682       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.409665       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.425692       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.481416       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.497783       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:34.497805       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 22:40:34.497811       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 22:40:34.774713       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [0ec2dd2ad58f540358cd2eb0b893a7af790ef825e2b75a75b054265763c10a17] <==
	I1210 22:40:22.545859       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:40:31.346717       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:31.346794       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.7"]
	E1210 22:40:31.346858       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:40:31.421776       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 22:40:31.421851       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 22:40:31.421880       1 server_linux.go:136] "Using iptables Proxier"
	I1210 22:40:31.433072       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:40:31.433329       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 22:40:31.433355       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:40:31.437369       1 config.go:200] "Starting service config controller"
	I1210 22:40:31.437501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:40:31.437581       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:40:31.437603       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:40:31.437643       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:40:31.437670       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:40:31.438067       1 config.go:309] "Starting node config controller"
	I1210 22:40:31.438072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:40:31.438077       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 22:40:31.538032       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 22:40:31.538072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:40:31.538076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [5dbf9761fcefaf14aade2d64f8f7da09a4f49842e20164598fb179c8618d0b07] <==
	I1210 22:41:16.034537       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:41:16.138511       1 shared_informer.go:377] "Caches are synced"
	I1210 22:41:16.138537       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.7"]
	E1210 22:41:16.138641       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 22:41:16.242594       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 22:41:16.242767       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 22:41:16.242794       1 server_linux.go:136] "Using iptables Proxier"
	I1210 22:41:16.262840       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 22:41:16.263276       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 22:41:16.263333       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:41:16.272289       1 config.go:309] "Starting node config controller"
	I1210 22:41:16.272317       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 22:41:16.272325       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 22:41:16.272710       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 22:41:16.272717       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 22:41:16.272879       1 config.go:200] "Starting service config controller"
	I1210 22:41:16.272884       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 22:41:16.272897       1 config.go:106] "Starting endpoint slice config controller"
	I1210 22:41:16.272900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 22:41:16.373463       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 22:41:16.373526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 22:41:16.373536       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7d6052c01241c62e2fe069151e8f691b156be57ec93cdcbd320ad795637d9925] <==
	I1210 22:40:29.201832       1 serving.go:386] Generated self-signed cert in-memory
	I1210 22:40:31.262109       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 22:40:31.262146       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 22:40:31.269554       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 22:40:31.269578       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:40:31.269619       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:40:31.269629       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:40:31.269644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 22:40:31.269653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 22:40:31.269836       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 22:40:31.269991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 22:40:31.370561       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:31.370803       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:31.370818       1 shared_informer.go:377] "Caches are synced"
	I1210 22:40:54.417125       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 22:40:54.417176       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 22:40:54.417197       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1210 22:40:54.417261       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 22:40:54.417315       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1210 22:40:54.417333       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 22:40:54.417563       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 22:40:54.431495       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [820b258590812670328513cb9f8bc63c82759dad552b272a481bd9d9e9ac10ed] <==
	E1210 22:41:14.075807       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 22:41:14.076352       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 22:41:14.076620       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1210 22:41:14.077045       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 22:41:14.086719       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1210 22:41:14.095021       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 22:41:14.095263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 22:41:14.095440       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1210 22:41:14.095476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 22:41:14.095631       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 22:41:14.097325       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1210 22:41:14.097371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1210 22:41:14.097461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 22:41:14.097579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1210 22:41:14.097620       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 22:41:14.098500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1210 22:41:14.098581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1210 22:41:14.098585       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 22:41:14.098648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 22:41:14.098953       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1210 22:41:14.099043       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1210 22:41:14.099075       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1210 22:41:14.099476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1210 22:41:14.099581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1210 22:41:15.638283       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 22:45:50 functional-497660 kubelet[6181]: E1210 22:45:50.796348    6181 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-b9lzz" containerName="dashboard-metrics-scraper"
	Dec 10 22:45:51 functional-497660 kubelet[6181]: E1210 22:45:51.018086    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406751017872367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:45:51 functional-497660 kubelet[6181]: E1210 22:45:51.018106    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406751017872367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:01 functional-497660 kubelet[6181]: E1210 22:46:01.022240    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406761021924432  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:01 functional-497660 kubelet[6181]: E1210 22:46:01.022276    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406761021924432  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:06 functional-497660 kubelet[6181]: E1210 22:46:06.794479    6181 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ztjdq" containerName="coredns"
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.885623    6181 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda90ed420-f6d0-41f8-94c4-4becc272220c/crio-4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c: Error finding container 4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c: Status 404 returned error can't find the container with id 4263736afb8084c8a9bf9f703a715a76554bc2aae72fc1ac586cf2569be4c43c
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.886309    6181 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod359b24c2-173b-4e8d-a9ad-37699e9c182c/crio-db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740: Error finding container db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740: Status 404 returned error can't find the container with id db9c896533757da59dae8d1c3f6ed532352c91bd38f4c9f4ba8251386c773740
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.886787    6181 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda02c9ca4916395fdf029605dc73da32a/crio-3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b: Error finding container 3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b: Status 404 returned error can't find the container with id 3ac5e4b41d294f3426726f2572ed9126b6764f43fed91b13a3cb0a0eabc4bc6b
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.887105    6181 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod1492eee028679664d5b8ec63b14d9ea6/crio-518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71: Error finding container 518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71: Status 404 returned error can't find the container with id 518a69eb2653beeda6d8de5fa55760f90f76b38c4dd30e99b66e402e50fa1e71
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.887371    6181 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod9f158481c41d7c7ea1ac420460ca09ac/crio-dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb: Error finding container dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb: Status 404 returned error can't find the container with id dfd30d7e19f4aee69eea36450219f701942f60a23c37f372c96474493c6ecdcb
	Dec 10 22:46:10 functional-497660 kubelet[6181]: E1210 22:46:10.887917    6181 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod69df49db-a6bb-4224-a082-ef172c852dbd/crio-e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9: Error finding container e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9: Status 404 returned error can't find the container with id e9e5fca8f4a8a784dab181ab2458952a82da7a3096134e7f17d1107bf99b85c9
	Dec 10 22:46:11 functional-497660 kubelet[6181]: E1210 22:46:11.024222    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406771023778559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:11 functional-497660 kubelet[6181]: E1210 22:46:11.024262    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406771023778559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:12 functional-497660 kubelet[6181]: E1210 22:46:12.800536    6181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-497660" containerName="kube-controller-manager"
	Dec 10 22:46:21 functional-497660 kubelet[6181]: E1210 22:46:21.029513    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406781028714851  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:21 functional-497660 kubelet[6181]: E1210 22:46:21.029543    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406781028714851  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:24 functional-497660 kubelet[6181]: E1210 22:46:24.796072    6181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-497660" containerName="kube-apiserver"
	Dec 10 22:46:31 functional-497660 kubelet[6181]: E1210 22:46:31.032026    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406791031601353  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:31 functional-497660 kubelet[6181]: E1210 22:46:31.032062    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406791031601353  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:35 functional-497660 kubelet[6181]: E1210 22:46:35.795039    6181 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-497660" containerName="etcd"
	Dec 10 22:46:41 functional-497660 kubelet[6181]: E1210 22:46:41.034648    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406801034275805  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:41 functional-497660 kubelet[6181]: E1210 22:46:41.034675    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406801034275805  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:51 functional-497660 kubelet[6181]: E1210 22:46:51.037897    6181 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765406811037248788  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	Dec 10 22:46:51 functional-497660 kubelet[6181]: E1210 22:46:51.037940    6181 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765406811037248788  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:263272}  inodes_used:{value:116}}"
	
	
	==> storage-provisioner [1e45254926301e8b4d9471fba3c1890485a1c6edf0d586b2a01416f02167bd3c] <==
	W1210 22:46:28.553483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:30.557841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:30.567284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:32.571053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:32.575677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:34.579497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:34.587224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:36.591380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:36.597051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:38.600245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:38.604969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:40.609034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:40.614352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:42.618646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:42.628161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:44.631771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:44.638336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:46.642230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:46.649930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:48.653338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:48.662992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:50.667289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:50.671914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:52.676602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:46:52.682138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e40900607868b08279b704b4e030e3fcc640199b5a009742f55772920b0a7c92] <==
	I1210 22:40:20.410182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 22:40:20.418243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 22:40:20.418379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1210 22:40:27.835226       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	W1210 22:40:31.308576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:35.570251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:39.169587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:42.224597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:45.247952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:48.900278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:48.906474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 22:40:48.906615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 22:40:48.906772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-497660_b23cd6fd-4661-4ef4-b87c-d8d895d5048d!
	I1210 22:40:48.909227       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f37caf89-da84-4657-9bed-a75cfc6fa267", APIVersion:"v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-497660_b23cd6fd-4661-4ef4-b87c-d8d895d5048d became leader
	W1210 22:40:48.918908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:48.930292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 22:40:49.007212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-497660_b23cd6fd-4661-4ef4-b87c-d8d895d5048d!
	W1210 22:40:50.934299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:50.948161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:52.951102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 22:40:52.961096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-497660 -n functional-497660
helpers_test.go:270: (dbg) Run:  kubectl --context functional-497660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount kubernetes-dashboard-b84665fb8-zftg8
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-497660 describe pod busybox-mount kubernetes-dashboard-b84665fb8-zftg8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-497660 describe pod busybox-mount kubernetes-dashboard-b84665fb8-zftg8: exit status 1 (68.581658ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-497660/192.168.39.7
	Start Time:       Wed, 10 Dec 2025 22:41:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e2c0dd3c65d44cb4c2dd60a6a98fe18cb804be77ea7ac7801fdc9ae725f0e7fe
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 10 Dec 2025 22:42:13 +0000
	      Finished:     Wed, 10 Dec 2025 22:42:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hcr94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hcr94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m4s   default-scheduler  Successfully assigned default/busybox-mount to functional-497660
	  Normal  Pulling    5m2s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m41s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.257s (21.362s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m41s  kubelet            Container created
	  Normal  Started    4m41s  kubelet            Container started

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zftg8" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-497660 describe pod busybox-mount kubernetes-dashboard-b84665fb8-zftg8: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.06s)
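For a manual re-check of this failure, the post-mortem above already shows the relevant queries; the snippet below simply replays them outside the harness. This is a sketch assuming the functional-497660 profile and its kubeconfig context still exist on the host:

	# list pods that are not in the Running phase, across all namespaces
	kubectl --context functional-497660 get po -A --field-selector=status.phase!=Running
	# describe the completed busybox-mount helper pod reported above
	kubectl --context functional-497660 describe pod busybox-mount -n default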

                                                
                                    
x
+
TestPreload (149.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-732316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1210 23:22:35.029082    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:23:21.737081    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:23:38.666125    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-732316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m31.828771809s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-732316 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-732316 image pull gcr.io/k8s-minikube/busybox: (3.382084894s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-732316
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-732316: (7.981327062s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-732316 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-732316 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.274803671s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-732316 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
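The list above is what image list returned after the preloaded restart; gcr.io/k8s-minikube/busybox, pulled before the stop, is missing. A minimal sketch of the same flow for reproducing this by hand, assuming the same minikube binary and profile name as the run above (verbosity flags trimmed, and the final grep is added here only for illustration; the Go test asserts the same substring):

	# start without preload, then pull an extra image into the container runtime
	out/minikube-linux-amd64 start -p test-preload-732316 --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-732316 image pull gcr.io/k8s-minikube/busybox
	# stop, restart with preload enabled, and check whether the pulled image survived
	out/minikube-linux-amd64 stop -p test-preload-732316
	out/minikube-linux-amd64 start -p test-preload-732316 --preload=true --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-732316 image list | grep gcr.io/k8s-minikube/busybox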
panic.go:615: *** TestPreload FAILED at 2025-12-10 23:24:37.763921227 +0000 UTC m=+3531.322749964
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-732316 -n test-preload-732316
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-732316 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-954539 ssh -n multinode-954539-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ ssh     │ multinode-954539 ssh -n multinode-954539 sudo cat /home/docker/cp-test_multinode-954539-m03_multinode-954539.txt                                          │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ cp      │ multinode-954539 cp multinode-954539-m03:/home/docker/cp-test.txt multinode-954539-m02:/home/docker/cp-test_multinode-954539-m03_multinode-954539-m02.txt │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ ssh     │ multinode-954539 ssh -n multinode-954539-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ ssh     │ multinode-954539 ssh -n multinode-954539-m02 sudo cat /home/docker/cp-test_multinode-954539-m03_multinode-954539-m02.txt                                  │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ node    │ multinode-954539 node stop m03                                                                                                                            │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:11 UTC │
	│ node    │ multinode-954539 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:11 UTC │ 10 Dec 25 23:12 UTC │
	│ node    │ list -p multinode-954539                                                                                                                                  │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:12 UTC │                     │
	│ stop    │ -p multinode-954539                                                                                                                                       │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:12 UTC │ 10 Dec 25 23:15 UTC │
	│ start   │ -p multinode-954539 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:15 UTC │ 10 Dec 25 23:17 UTC │
	│ node    │ list -p multinode-954539                                                                                                                                  │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:17 UTC │                     │
	│ node    │ multinode-954539 node delete m03                                                                                                                          │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:17 UTC │ 10 Dec 25 23:17 UTC │
	│ stop    │ multinode-954539 stop                                                                                                                                     │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:17 UTC │ 10 Dec 25 23:19 UTC │
	│ start   │ -p multinode-954539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:19 UTC │ 10 Dec 25 23:21 UTC │
	│ node    │ list -p multinode-954539                                                                                                                                  │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:21 UTC │                     │
	│ start   │ -p multinode-954539-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-954539-m02 │ jenkins │ v1.37.0 │ 10 Dec 25 23:21 UTC │                     │
	│ start   │ -p multinode-954539-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-954539-m03 │ jenkins │ v1.37.0 │ 10 Dec 25 23:21 UTC │ 10 Dec 25 23:22 UTC │
	│ node    │ add -p multinode-954539                                                                                                                                   │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:22 UTC │                     │
	│ delete  │ -p multinode-954539-m03                                                                                                                                   │ multinode-954539-m03 │ jenkins │ v1.37.0 │ 10 Dec 25 23:22 UTC │ 10 Dec 25 23:22 UTC │
	│ delete  │ -p multinode-954539                                                                                                                                       │ multinode-954539     │ jenkins │ v1.37.0 │ 10 Dec 25 23:22 UTC │ 10 Dec 25 23:22 UTC │
	│ start   │ -p test-preload-732316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-732316  │ jenkins │ v1.37.0 │ 10 Dec 25 23:22 UTC │ 10 Dec 25 23:23 UTC │
	│ image   │ test-preload-732316 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-732316  │ jenkins │ v1.37.0 │ 10 Dec 25 23:23 UTC │ 10 Dec 25 23:23 UTC │
	│ stop    │ -p test-preload-732316                                                                                                                                    │ test-preload-732316  │ jenkins │ v1.37.0 │ 10 Dec 25 23:23 UTC │ 10 Dec 25 23:23 UTC │
	│ start   │ -p test-preload-732316 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-732316  │ jenkins │ v1.37.0 │ 10 Dec 25 23:23 UTC │ 10 Dec 25 23:24 UTC │
	│ image   │ test-preload-732316 image list                                                                                                                            │ test-preload-732316  │ jenkins │ v1.37.0 │ 10 Dec 25 23:24 UTC │ 10 Dec 25 23:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:23:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:23:54.349771   35456 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:23:54.350061   35456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:23:54.350071   35456 out.go:374] Setting ErrFile to fd 2...
	I1210 23:23:54.350076   35456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:23:54.350311   35456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:23:54.350961   35456 out.go:368] Setting JSON to false
	I1210 23:23:54.351822   35456 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3975,"bootTime":1765405059,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:23:54.351886   35456 start.go:143] virtualization: kvm guest
	I1210 23:23:54.354288   35456 out.go:179] * [test-preload-732316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:23:54.355747   35456 notify.go:221] Checking for updates...
	I1210 23:23:54.355816   35456 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:23:54.357363   35456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:23:54.358749   35456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 23:23:54.360042   35456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 23:23:54.361222   35456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:23:54.362398   35456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:23:54.364135   35456 config.go:182] Loaded profile config "test-preload-732316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:23:54.364682   35456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:23:54.400117   35456 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 23:23:54.401514   35456 start.go:309] selected driver: kvm2
	I1210 23:23:54.401553   35456 start.go:927] validating driver "kvm2" against &{Name:test-preload-732316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPor
t:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-732316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false Ex
traDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:23:54.401656   35456 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:23:54.402550   35456 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:23:54.402586   35456 cni.go:84] Creating CNI manager for ""
	I1210 23:23:54.402637   35456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 23:23:54.402701   35456 start.go:353] cluster config:
	{Name:test-preload-732316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-7323
16 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:23:54.402794   35456 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:23:54.404612   35456 out.go:179] * Starting "test-preload-732316" primary control-plane node in "test-preload-732316" cluster
	I1210 23:23:54.405709   35456 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:23:54.405734   35456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 23:23:54.405741   35456 cache.go:65] Caching tarball of preloaded images
	I1210 23:23:54.405831   35456 preload.go:238] Found /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 23:23:54.405842   35456 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:23:54.405923   35456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/config.json ...
	I1210 23:23:54.406110   35456 start.go:360] acquireMachinesLock for test-preload-732316: {Name:mkee27f251311e7c2b20a9d6393fa289a9410b32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 23:23:54.406155   35456 start.go:364] duration metric: took 26.386µs to acquireMachinesLock for "test-preload-732316"
	I1210 23:23:54.406167   35456 start.go:96] Skipping create...Using existing machine configuration
	I1210 23:23:54.406171   35456 fix.go:54] fixHost starting: 
	I1210 23:23:54.408160   35456 fix.go:112] recreateIfNeeded on test-preload-732316: state=Stopped err=<nil>
	W1210 23:23:54.408181   35456 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 23:23:54.409878   35456 out.go:252] * Restarting existing kvm2 VM for "test-preload-732316" ...
	I1210 23:23:54.409904   35456 main.go:143] libmachine: starting domain...
	I1210 23:23:54.409912   35456 main.go:143] libmachine: ensuring networks are active...
	I1210 23:23:54.410701   35456 main.go:143] libmachine: Ensuring network default is active
	I1210 23:23:54.411081   35456 main.go:143] libmachine: Ensuring network mk-test-preload-732316 is active
	I1210 23:23:54.411532   35456 main.go:143] libmachine: getting domain XML...
	I1210 23:23:54.412623   35456 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-732316</name>
	  <uuid>16408820-5152-4de2-848c-aba5736b632e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/test-preload-732316.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:02:6a:6b'/>
	      <source network='mk-test-preload-732316'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bc:d6:c4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
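The XML above is the existing libvirt domain definition that minikube re-starts for this profile. As a rough illustration only (the kvm2 driver talks to the libvirt API directly rather than shelling out), restarting an already-defined domain can be sketched with virsh; the connection URI and domain name are taken from the log, everything else below is an assumption:

    // Sketch only: restart an already-defined libvirt domain via virsh.
    // Not minikube's actual code path, which uses the libvirt API.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func startDomain(name string) error {
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
        }
        return nil
    }

    func main() {
        if err := startDomain("test-preload-732316"); err != nil {
            fmt.Println(err)
        }
    }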
	
	I1210 23:23:55.710526   35456 main.go:143] libmachine: waiting for domain to start...
	I1210 23:23:55.711827   35456 main.go:143] libmachine: domain is now running
	I1210 23:23:55.711845   35456 main.go:143] libmachine: waiting for IP...
	I1210 23:23:55.712613   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:23:55.713207   35456 main.go:143] libmachine: domain test-preload-732316 has current primary IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:23:55.713220   35456 main.go:143] libmachine: found domain IP: 192.168.39.175
	I1210 23:23:55.713228   35456 main.go:143] libmachine: reserving static IP address...
	I1210 23:23:55.713614   35456 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-732316", mac: "52:54:00:02:6a:6b", ip: "192.168.39.175"} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:22:25 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:23:55.713649   35456 main.go:143] libmachine: skip adding static IP to network mk-test-preload-732316 - found existing host DHCP lease matching {name: "test-preload-732316", mac: "52:54:00:02:6a:6b", ip: "192.168.39.175"}
	I1210 23:23:55.713661   35456 main.go:143] libmachine: reserved static IP address 192.168.39.175 for domain test-preload-732316
	I1210 23:23:55.713669   35456 main.go:143] libmachine: waiting for SSH...
	I1210 23:23:55.713678   35456 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 23:23:55.716065   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:23:55.716398   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:22:25 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:23:55.716430   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:23:55.716662   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:23:55.716890   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:23:55.716903   35456 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 23:23:58.777748   35456 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.175:22: connect: no route to host
	I1210 23:24:04.857743   35456 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.175:22: connect: no route to host
	I1210 23:24:07.974314   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
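The two "connect: no route to host" errors above are the normal wait-for-SSH phase: the driver keeps dialing the guest's port 22 until sshd answers, then runs `exit 0` as a liveness probe. A minimal sketch of that polling pattern, using only the Go standard library (the address comes from the log; the timeouts and function name are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls addr until a TCP connection succeeds or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil // sshd is accepting connections
            }
            // Typical transient error while the guest boots: "connect: no route to host".
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for SSH on %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.39.175:22", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }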
	I1210 23:24:07.977914   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:07.978327   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:07.978359   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:07.978572   35456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/config.json ...
	I1210 23:24:07.978763   35456 machine.go:94] provisionDockerMachine start ...
	I1210 23:24:07.980845   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:07.981144   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:07.981166   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:07.981350   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:24:07.981560   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:24:07.981570   35456 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:24:08.097092   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 23:24:08.097119   35456 buildroot.go:166] provisioning hostname "test-preload-732316"
	I1210 23:24:08.100379   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.100792   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.100814   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.100987   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:24:08.101222   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:24:08.101234   35456 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-732316 && echo "test-preload-732316" | sudo tee /etc/hostname
	I1210 23:24:08.234212   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-732316
	
	I1210 23:24:08.237690   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.238114   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.238138   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.238346   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:24:08.238613   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:24:08.238636   35456 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-732316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-732316/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-732316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:24:08.363903   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:24:08.363933   35456 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5125/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5125/.minikube}
	I1210 23:24:08.363968   35456 buildroot.go:174] setting up certificates
	I1210 23:24:08.363980   35456 provision.go:84] configureAuth start
	I1210 23:24:08.367334   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.367833   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.367881   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.370577   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.370987   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.371013   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.371171   35456 provision.go:143] copyHostCerts
	I1210 23:24:08.371245   35456 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5125/.minikube/key.pem, removing ...
	I1210 23:24:08.371266   35456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5125/.minikube/key.pem
	I1210 23:24:08.371353   35456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/key.pem (1675 bytes)
	I1210 23:24:08.371495   35456 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5125/.minikube/ca.pem, removing ...
	I1210 23:24:08.371508   35456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.pem
	I1210 23:24:08.371545   35456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/ca.pem (1078 bytes)
	I1210 23:24:08.371619   35456 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-5125/.minikube/cert.pem, removing ...
	I1210 23:24:08.371627   35456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-5125/.minikube/cert.pem
	I1210 23:24:08.371652   35456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/cert.pem (1123 bytes)
	I1210 23:24:08.371698   35456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem org=jenkins.test-preload-732316 san=[127.0.0.1 192.168.39.175 localhost minikube test-preload-732316]
	I1210 23:24:08.427061   35456 provision.go:177] copyRemoteCerts
	I1210 23:24:08.427134   35456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:24:08.430157   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.430799   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.430828   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.430999   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:08.520343   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 23:24:08.553419   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 23:24:08.582688   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 23:24:08.611880   35456 provision.go:87] duration metric: took 247.886783ms to configureAuth
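configureAuth above regenerates the docker-machine style server certificate for the SAN list logged at 23:24:08.371698 (127.0.0.1, 192.168.39.175, localhost, minikube, test-preload-732316). A self-contained sketch of issuing such a certificate with crypto/x509 follows; the throwaway CA, key size, validity period and function names are assumptions for illustration (the real run reuses the existing ca.pem/ca-key.pem instead):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate carrying the SANs from the log.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-732316"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as logged: san=[127.0.0.1 192.168.39.175 localhost minikube test-preload-732316]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.175")},
            DNSNames:    []string{"localhost", "minikube", "test-preload-732316"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

    func main() {
        // Throwaway CA, only so the sketch runs standalone.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)
        if _, _, err := issueServerCert(caCert, caKey); err != nil {
            fmt.Println(err)
        }
    }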
	I1210 23:24:08.611915   35456 buildroot.go:189] setting minikube options for container-runtime
	I1210 23:24:08.612093   35456 config.go:182] Loaded profile config "test-preload-732316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:24:08.615081   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.615533   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.615567   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.615755   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:24:08.616024   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:24:08.616050   35456 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:24:08.870955   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:24:08.870986   35456 machine.go:97] duration metric: took 892.210333ms to provisionDockerMachine
	I1210 23:24:08.870999   35456 start.go:293] postStartSetup for "test-preload-732316" (driver="kvm2")
	I1210 23:24:08.871009   35456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:24:08.871077   35456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:24:08.874172   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.874624   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:08.874649   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:08.874850   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:08.963888   35456 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:24:08.968782   35456 info.go:137] Remote host: Buildroot 2025.02
	I1210 23:24:08.968810   35456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/addons for local assets ...
	I1210 23:24:08.968873   35456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/files for local assets ...
	I1210 23:24:08.968972   35456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/ssl/certs/90652.pem -> 90652.pem in /etc/ssl/certs
	I1210 23:24:08.969088   35456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 23:24:08.980632   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/ssl/certs/90652.pem --> /etc/ssl/certs/90652.pem (1708 bytes)
	I1210 23:24:09.010523   35456 start.go:296] duration metric: took 139.508974ms for postStartSetup
	I1210 23:24:09.010578   35456 fix.go:56] duration metric: took 14.604405141s for fixHost
	I1210 23:24:09.013671   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.014056   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:09.014079   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.014267   35456 main.go:143] libmachine: Using SSH client type: native
	I1210 23:24:09.014547   35456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1210 23:24:09.014560   35456 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 23:24:09.130889   35456 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765409049.096784999
	
	I1210 23:24:09.130909   35456 fix.go:216] guest clock: 1765409049.096784999
	I1210 23:24:09.130918   35456 fix.go:229] Guest: 2025-12-10 23:24:09.096784999 +0000 UTC Remote: 2025-12-10 23:24:09.010584645 +0000 UTC m=+14.710462000 (delta=86.200354ms)
	I1210 23:24:09.130936   35456 fix.go:200] guest clock delta is within tolerance: 86.200354ms
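The fixHost step above runs `date +%s.%N` in the guest and compares the result with the host clock, proceeding only when the delta is within tolerance (86.2 ms here). A small sketch of that comparison; the one-second tolerance and the helper name are illustrative, since the actual threshold is not shown in this log:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns host minus guest.
    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return hostNow.Sub(guest), nil
    }

    func main() {
        delta, err := clockDelta("1765409049.096784999", time.Now())
        if err != nil {
            fmt.Println(err)
            return
        }
        const tolerance = time.Second // illustrative threshold, not minikube's
        fmt.Printf("skew %v, within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
    }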
	I1210 23:24:09.130959   35456 start.go:83] releasing machines lock for "test-preload-732316", held for 14.724780616s
	I1210 23:24:09.134047   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.134508   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:09.134539   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.135173   35456 ssh_runner.go:195] Run: cat /version.json
	I1210 23:24:09.135279   35456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:24:09.138236   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.138675   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:09.138697   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.138698   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.138980   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:09.139254   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:09.139282   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:09.139536   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:09.226302   35456 ssh_runner.go:195] Run: systemctl --version
	I1210 23:24:09.265193   35456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:24:09.411778   35456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:24:09.419007   35456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:24:09.419108   35456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:24:09.440039   35456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 23:24:09.440069   35456 start.go:496] detecting cgroup driver to use...
	I1210 23:24:09.440138   35456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:24:09.459647   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:24:09.477823   35456 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:24:09.477885   35456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:24:09.495927   35456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:24:09.513999   35456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:24:09.663486   35456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:24:09.877562   35456 docker.go:234] disabling docker service ...
	I1210 23:24:09.877659   35456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:24:09.893913   35456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:24:09.909584   35456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:24:10.072406   35456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:24:10.217298   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:24:10.234638   35456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:24:10.258634   35456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:24:10.258693   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.271540   35456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 23:24:10.271602   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.285810   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.298564   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.312954   35456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:24:10.327131   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.341127   35456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:24:10.362555   35456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
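The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. The same line-rewrite idea in Go, for the two simple key = value cases (paths and keys are taken from the log; minikube itself shells out to sed over SSH, so this is only a sketch):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfValue replaces a `key = value` line in a CRI-O drop-in, or reports it missing.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        if !re.Match(data) {
            return fmt.Errorf("%s: no %q line found", path, key)
        }
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        for key, val := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.10.1",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setConfValue(conf, key, val); err != nil {
                fmt.Println(err)
            }
        }
    }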
	I1210 23:24:10.375278   35456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:24:10.386543   35456 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 23:24:10.386602   35456 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 23:24:10.407728   35456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:24:10.419964   35456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:24:10.565017   35456 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:24:10.690957   35456 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:24:10.691036   35456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:24:10.696696   35456 start.go:564] Will wait 60s for crictl version
	I1210 23:24:10.696766   35456 ssh_runner.go:195] Run: which crictl
	I1210 23:24:10.701261   35456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 23:24:10.737985   35456 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 23:24:10.738058   35456 ssh_runner.go:195] Run: crio --version
	I1210 23:24:10.768494   35456 ssh_runner.go:195] Run: crio --version
	I1210 23:24:10.799796   35456 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1210 23:24:10.803630   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:10.804229   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:10.804258   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:10.804471   35456 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 23:24:10.809121   35456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:24:10.824325   35456 kubeadm.go:884] updating cluster {Name:test-preload-732316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.34.2 ClusterName:test-preload-732316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:24:10.824548   35456 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:24:10.824604   35456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:24:10.859954   35456 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1210 23:24:10.860021   35456 ssh_runner.go:195] Run: which lz4
	I1210 23:24:10.864859   35456 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 23:24:10.870057   35456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 23:24:10.870098   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1210 23:24:12.173113   35456 crio.go:462] duration metric: took 1.308286462s to copy over tarball
	I1210 23:24:12.173195   35456 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 23:24:13.575613   35456 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.402392987s)
	I1210 23:24:13.575642   35456 crio.go:469] duration metric: took 1.40250253s to extract the tarball
	I1210 23:24:13.575650   35456 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 23:24:13.616621   35456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:24:13.661660   35456 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:24:13.661684   35456 cache_images.go:86] Images are preloaded, skipping loading
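The block above shows the preload fallback: the freshly restarted VM's image store is empty ("assuming images are not preloaded"), so the locally cached tarball is copied in and extracted, after which all images are reported as preloaded. The tarball name encodes the preload schema, Kubernetes version, runtime and architecture; a small sketch of resolving and checking that cache entry (the "v18" schema segment is taken from the path in the log, and the helper name is an assumption):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadTarball builds the cache path used in this run and reports whether it exists.
    func preloadTarball(minikubeHome, k8sVersion, runtime, arch string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
        path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        _, err := os.Stat(path)
        return path, err == nil
    }

    func main() {
        path, ok := preloadTarball(os.Getenv("MINIKUBE_HOME"), "v1.34.2", "cri-o", "amd64")
        fmt.Println(path, "exists:", ok)
    }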
	I1210 23:24:13.661691   35456 kubeadm.go:935] updating node { 192.168.39.175  8443 v1.34.2 crio true true} ...
	I1210 23:24:13.661785   35456 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-732316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-732316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:24:13.661849   35456 ssh_runner.go:195] Run: crio config
	I1210 23:24:13.717007   35456 cni.go:84] Creating CNI manager for ""
	I1210 23:24:13.717037   35456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 23:24:13.717055   35456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:24:13.717088   35456 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-732316 NodeName:test-preload-732316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:24:13.717237   35456 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-732316"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.175"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:24:13.717315   35456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:24:13.729377   35456 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:24:13.729469   35456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:24:13.741227   35456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1210 23:24:13.762398   35456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:24:13.782394   35456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 23:24:13.804380   35456 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1210 23:24:13.808902   35456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:24:13.823429   35456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:24:13.969217   35456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:24:14.000412   35456 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316 for IP: 192.168.39.175
	I1210 23:24:14.000475   35456 certs.go:195] generating shared ca certs ...
	I1210 23:24:14.000499   35456 certs.go:227] acquiring lock for ca certs: {Name:mkea05d5a03ad9931f0e4f58a8f8d8a307addad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:24:14.000684   35456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key
	I1210 23:24:14.000740   35456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key
	I1210 23:24:14.000751   35456 certs.go:257] generating profile certs ...
	I1210 23:24:14.000831   35456 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.key
	I1210 23:24:14.000891   35456 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/apiserver.key.307ca644
	I1210 23:24:14.000930   35456 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/proxy-client.key
	I1210 23:24:14.001054   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/9065.pem (1338 bytes)
	W1210 23:24:14.001089   35456 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-5125/.minikube/certs/9065_empty.pem, impossibly tiny 0 bytes
	I1210 23:24:14.001096   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 23:24:14.001118   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem (1078 bytes)
	I1210 23:24:14.001138   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:24:14.001163   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem (1675 bytes)
	I1210 23:24:14.001210   35456 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/ssl/certs/90652.pem (1708 bytes)
	I1210 23:24:14.001854   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:24:14.035677   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 23:24:14.068357   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:24:14.101856   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 23:24:14.131887   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 23:24:14.161640   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:24:14.192706   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:24:14.223621   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:24:14.253679   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/certs/9065.pem --> /usr/share/ca-certificates/9065.pem (1338 bytes)
	I1210 23:24:14.282741   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/ssl/certs/90652.pem --> /usr/share/ca-certificates/90652.pem (1708 bytes)
	I1210 23:24:14.312051   35456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:24:14.341507   35456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:24:14.364183   35456 ssh_runner.go:195] Run: openssl version
	I1210 23:24:14.370574   35456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:24:14.382032   35456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:24:14.394323   35456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:24:14.399515   35456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:24:14.399613   35456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:24:14.406828   35456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:24:14.418897   35456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 23:24:14.431280   35456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9065.pem
	I1210 23:24:14.443631   35456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9065.pem /etc/ssl/certs/9065.pem
	I1210 23:24:14.455771   35456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9065.pem
	I1210 23:24:14.461236   35456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 22:38 /usr/share/ca-certificates/9065.pem
	I1210 23:24:14.461291   35456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9065.pem
	I1210 23:24:14.468463   35456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 23:24:14.480345   35456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9065.pem /etc/ssl/certs/51391683.0
	I1210 23:24:14.492184   35456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/90652.pem
	I1210 23:24:14.503715   35456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/90652.pem /etc/ssl/certs/90652.pem
	I1210 23:24:14.515556   35456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90652.pem
	I1210 23:24:14.520812   35456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 22:38 /usr/share/ca-certificates/90652.pem
	I1210 23:24:14.520882   35456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90652.pem
	I1210 23:24:14.527989   35456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 23:24:14.539540   35456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/90652.pem /etc/ssl/certs/3ec20f2e.0
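Editor's note: the log lines above show minikube copying CA certificates into /usr/share/ca-certificates and then wiring each one into /etc/ssl/certs via an OpenSSL subject-hash symlink (for example b5213941.0), which is how TLS libraries locate trusted CAs by hash. The following is a minimal sketch of that hash-and-symlink step, not minikube's actual implementation; the paths in main are hypothetical and it assumes an openssl binary on PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the pattern in the log: compute the OpenSSL subject
// hash of a CA certificate and symlink it into the trust directory as
// <hash>.0, the name OpenSSL-based clients look up.
func linkCertByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical invocation; the log above installs minikubeCA.pem, 9065.pem and 90652.pem this way.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}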
	I1210 23:24:14.551318   35456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:24:14.556469   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 23:24:14.564346   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 23:24:14.572053   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 23:24:14.579419   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 23:24:14.586788   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 23:24:14.594158   35456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 23:24:14.601421   35456 kubeadm.go:401] StartCluster: {Name:test-preload-732316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-732316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:24:14.601511   35456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:24:14.601579   35456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:24:14.636304   35456 cri.go:89] found id: ""
	I1210 23:24:14.636387   35456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:24:14.648908   35456 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 23:24:14.648929   35456 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 23:24:14.648973   35456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 23:24:14.660776   35456 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:24:14.661200   35456 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-732316" does not appear in /home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 23:24:14.661297   35456 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-5125/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-732316" cluster setting kubeconfig missing "test-preload-732316" context setting]
	I1210 23:24:14.661580   35456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/kubeconfig: {Name:mkc997741ee5522db4814beb6df9db1a27fdfa83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:24:14.662046   35456 kapi.go:59] client config for test-preload-732316: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:24:14.662456   35456 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 23:24:14.662470   35456 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 23:24:14.662475   35456 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 23:24:14.662479   35456 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 23:24:14.662483   35456 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 23:24:14.662786   35456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 23:24:14.677959   35456 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.175
	I1210 23:24:14.678004   35456 kubeadm.go:1161] stopping kube-system containers ...
	I1210 23:24:14.678019   35456 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 23:24:14.678082   35456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:24:14.719276   35456 cri.go:89] found id: ""
	I1210 23:24:14.719365   35456 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 23:24:14.744140   35456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:24:14.757130   35456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:24:14.757158   35456 kubeadm.go:158] found existing configuration files:
	
	I1210 23:24:14.757202   35456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:24:14.768859   35456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:24:14.768932   35456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:24:14.781226   35456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:24:14.792920   35456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:24:14.792984   35456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:24:14.805962   35456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:24:14.817871   35456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:24:14.817939   35456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:24:14.830512   35456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:24:14.842774   35456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:24:14.842855   35456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:24:14.855327   35456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:24:14.867692   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:14.923774   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:16.110942   35456 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.187125211s)
	I1210 23:24:16.111040   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:16.364337   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:16.430344   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:16.512090   35456 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:24:16.512181   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:17.012932   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:17.512466   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:18.013233   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:18.512897   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:18.563671   35456 api_server.go:72] duration metric: took 2.051582565s to wait for apiserver process to appear ...
	I1210 23:24:18.563697   35456 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:24:18.563715   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:18.564272   35456 api_server.go:269] stopped: https://192.168.39.175:8443/healthz: Get "https://192.168.39.175:8443/healthz": dial tcp 192.168.39.175:8443: connect: connection refused
	I1210 23:24:19.063943   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:21.554662   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:24:21.554686   35456 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:24:21.554701   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:21.664358   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:24:21.664388   35456 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:24:21.664403   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:21.676563   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 23:24:21.676595   35456 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 23:24:22.064129   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:22.069489   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:24:22.069519   35456 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:24:22.563777   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:22.570174   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 23:24:22.570203   35456 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 23:24:23.064618   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:23.076742   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1210 23:24:23.086917   35456 api_server.go:141] control plane version: v1.34.2
	I1210 23:24:23.086944   35456 api_server.go:131] duration metric: took 4.523241205s to wait for apiserver health ...
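Editor's note: the healthz wait above tolerates connection-refused errors, anonymous 403 responses, and 500s while apiserver post-start hooks finish, and only stops once /healthz returns 200 "ok". A minimal sketch of that readiness loop follows; it is illustrative only, uses the endpoint from this log, and skips TLS verification instead of loading the cluster CA as the real code does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Connection errors and non-200 codes (403 for the
// anonymous user, 500 while post-start hooks run) count as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: accept the self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.175:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}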
	I1210 23:24:23.086953   35456 cni.go:84] Creating CNI manager for ""
	I1210 23:24:23.086959   35456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 23:24:23.088632   35456 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 23:24:23.089716   35456 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 23:24:23.115975   35456 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 23:24:23.147244   35456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:24:23.152736   35456 system_pods.go:59] 7 kube-system pods found
	I1210 23:24:23.152814   35456 system_pods.go:61] "coredns-66bc5c9577-bxjql" [3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:24:23.152832   35456 system_pods.go:61] "etcd-test-preload-732316" [057cfbfa-f0f6-42bc-b69f-8d2c9c264edd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 23:24:23.152851   35456 system_pods.go:61] "kube-apiserver-test-preload-732316" [3f6f5d8f-0cd3-4b11-9cf7-f2a1f8de5d04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:24:23.152866   35456 system_pods.go:61] "kube-controller-manager-test-preload-732316" [fabb61dd-81f0-4786-bba0-6a6fc2fb9f14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:24:23.152880   35456 system_pods.go:61] "kube-proxy-5qj4j" [b5f77418-5e7d-4a0e-9d2e-cdec411d474b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 23:24:23.152891   35456 system_pods.go:61] "kube-scheduler-test-preload-732316" [d253b2fb-2df8-4517-901b-507bd6de0645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:24:23.152901   35456 system_pods.go:61] "storage-provisioner" [fa4fa3e3-df0f-41d0-9f59-21d973168ea0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:24:23.152912   35456 system_pods.go:74] duration metric: took 5.640612ms to wait for pod list to return data ...
	I1210 23:24:23.152927   35456 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:24:23.159020   35456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 23:24:23.159055   35456 node_conditions.go:123] node cpu capacity is 2
	I1210 23:24:23.159074   35456 node_conditions.go:105] duration metric: took 6.140671ms to run NodePressure ...
	I1210 23:24:23.159136   35456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 23:24:23.430305   35456 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1210 23:24:23.434746   35456 kubeadm.go:744] kubelet initialised
	I1210 23:24:23.434772   35456 kubeadm.go:745] duration metric: took 4.44239ms waiting for restarted kubelet to initialise ...
	I1210 23:24:23.434786   35456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:24:23.452634   35456 ops.go:34] apiserver oom_adj: -16
	I1210 23:24:23.452662   35456 kubeadm.go:602] duration metric: took 8.803726553s to restartPrimaryControlPlane
	I1210 23:24:23.452672   35456 kubeadm.go:403] duration metric: took 8.851260248s to StartCluster
	I1210 23:24:23.452690   35456 settings.go:142] acquiring lock: {Name:mkb6311113a1595706e930e5ec066489475d2931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:24:23.452777   35456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 23:24:23.453298   35456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/kubeconfig: {Name:mkc997741ee5522db4814beb6df9db1a27fdfa83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:24:23.453601   35456 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.175 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:24:23.453679   35456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 23:24:23.453772   35456 addons.go:70] Setting storage-provisioner=true in profile "test-preload-732316"
	I1210 23:24:23.453792   35456 addons.go:239] Setting addon storage-provisioner=true in "test-preload-732316"
	I1210 23:24:23.453799   35456 config.go:182] Loaded profile config "test-preload-732316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	W1210 23:24:23.453801   35456 addons.go:248] addon storage-provisioner should already be in state true
	I1210 23:24:23.453821   35456 addons.go:70] Setting default-storageclass=true in profile "test-preload-732316"
	I1210 23:24:23.453839   35456 host.go:66] Checking if "test-preload-732316" exists ...
	I1210 23:24:23.453857   35456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-732316"
	I1210 23:24:23.455985   35456 out.go:179] * Verifying Kubernetes components...
	I1210 23:24:23.456114   35456 kapi.go:59] client config for test-preload-732316: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:24:23.456430   35456 addons.go:239] Setting addon default-storageclass=true in "test-preload-732316"
	W1210 23:24:23.456475   35456 addons.go:248] addon default-storageclass should already be in state true
	I1210 23:24:23.456501   35456 host.go:66] Checking if "test-preload-732316" exists ...
	I1210 23:24:23.457174   35456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:24:23.457180   35456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:24:23.458216   35456 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:24:23.458234   35456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:24:23.459135   35456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:24:23.459149   35456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:24:23.461265   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:23.461699   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:23.461755   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:23.461939   35456 main.go:143] libmachine: domain test-preload-732316 has defined MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:23.461950   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:23.462430   35456 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:6a:6b", ip: ""} in network mk-test-preload-732316: {Iface:virbr1 ExpiryTime:2025-12-11 00:24:05 +0000 UTC Type:0 Mac:52:54:00:02:6a:6b Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-732316 Clientid:01:52:54:00:02:6a:6b}
	I1210 23:24:23.462487   35456 main.go:143] libmachine: domain test-preload-732316 has defined IP address 192.168.39.175 and MAC address 52:54:00:02:6a:6b in network mk-test-preload-732316
	I1210 23:24:23.462673   35456 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/test-preload-732316/id_rsa Username:docker}
	I1210 23:24:23.706397   35456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:24:23.737183   35456 node_ready.go:35] waiting up to 6m0s for node "test-preload-732316" to be "Ready" ...
	I1210 23:24:23.869565   35456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:24:23.873132   35456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:24:24.552392   35456 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 23:24:24.554014   35456 addons.go:530] duration metric: took 1.100337163s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 23:24:25.741717   35456 node_ready.go:57] node "test-preload-732316" has "Ready":"False" status (will retry)
	W1210 23:24:28.241110   35456 node_ready.go:57] node "test-preload-732316" has "Ready":"False" status (will retry)
	W1210 23:24:30.241257   35456 node_ready.go:57] node "test-preload-732316" has "Ready":"False" status (will retry)
	I1210 23:24:32.241837   35456 node_ready.go:49] node "test-preload-732316" is "Ready"
	I1210 23:24:32.241878   35456 node_ready.go:38] duration metric: took 8.504630717s for node "test-preload-732316" to be "Ready" ...
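Editor's note: the node_ready wait above retries until the node's Ready condition reports True. A minimal client-go sketch of the same check, under the assumption of a standard kubeconfig (the path below is hypothetical) and the node name from this test run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has its Ready condition set to True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ready, err := nodeReady(ctx, cs, "test-preload-732316"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}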
	I1210 23:24:32.241899   35456 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:24:32.241959   35456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:24:32.263769   35456 api_server.go:72] duration metric: took 8.810127343s to wait for apiserver process to appear ...
	I1210 23:24:32.263797   35456 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:24:32.263813   35456 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1210 23:24:32.269802   35456 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1210 23:24:32.270787   35456 api_server.go:141] control plane version: v1.34.2
	I1210 23:24:32.270814   35456 api_server.go:131] duration metric: took 7.01001ms to wait for apiserver health ...
	I1210 23:24:32.270824   35456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:24:32.275139   35456 system_pods.go:59] 7 kube-system pods found
	I1210 23:24:32.275171   35456 system_pods.go:61] "coredns-66bc5c9577-bxjql" [3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f] Running
	I1210 23:24:32.275179   35456 system_pods.go:61] "etcd-test-preload-732316" [057cfbfa-f0f6-42bc-b69f-8d2c9c264edd] Running
	I1210 23:24:32.275189   35456 system_pods.go:61] "kube-apiserver-test-preload-732316" [3f6f5d8f-0cd3-4b11-9cf7-f2a1f8de5d04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:24:32.275197   35456 system_pods.go:61] "kube-controller-manager-test-preload-732316" [fabb61dd-81f0-4786-bba0-6a6fc2fb9f14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:24:32.275205   35456 system_pods.go:61] "kube-proxy-5qj4j" [b5f77418-5e7d-4a0e-9d2e-cdec411d474b] Running
	I1210 23:24:32.275211   35456 system_pods.go:61] "kube-scheduler-test-preload-732316" [d253b2fb-2df8-4517-901b-507bd6de0645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:24:32.275216   35456 system_pods.go:61] "storage-provisioner" [fa4fa3e3-df0f-41d0-9f59-21d973168ea0] Running
	I1210 23:24:32.275228   35456 system_pods.go:74] duration metric: took 4.396362ms to wait for pod list to return data ...
	I1210 23:24:32.275239   35456 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:24:32.278027   35456 default_sa.go:45] found service account: "default"
	I1210 23:24:32.278061   35456 default_sa.go:55] duration metric: took 2.813506ms for default service account to be created ...
	I1210 23:24:32.278077   35456 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:24:32.281213   35456 system_pods.go:86] 7 kube-system pods found
	I1210 23:24:32.281245   35456 system_pods.go:89] "coredns-66bc5c9577-bxjql" [3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f] Running
	I1210 23:24:32.281253   35456 system_pods.go:89] "etcd-test-preload-732316" [057cfbfa-f0f6-42bc-b69f-8d2c9c264edd] Running
	I1210 23:24:32.281263   35456 system_pods.go:89] "kube-apiserver-test-preload-732316" [3f6f5d8f-0cd3-4b11-9cf7-f2a1f8de5d04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 23:24:32.281273   35456 system_pods.go:89] "kube-controller-manager-test-preload-732316" [fabb61dd-81f0-4786-bba0-6a6fc2fb9f14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 23:24:32.281284   35456 system_pods.go:89] "kube-proxy-5qj4j" [b5f77418-5e7d-4a0e-9d2e-cdec411d474b] Running
	I1210 23:24:32.281289   35456 system_pods.go:89] "kube-scheduler-test-preload-732316" [d253b2fb-2df8-4517-901b-507bd6de0645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 23:24:32.281293   35456 system_pods.go:89] "storage-provisioner" [fa4fa3e3-df0f-41d0-9f59-21d973168ea0] Running
	I1210 23:24:32.281305   35456 system_pods.go:126] duration metric: took 3.220268ms to wait for k8s-apps to be running ...
	I1210 23:24:32.281315   35456 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:24:32.281365   35456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:24:32.298090   35456 system_svc.go:56] duration metric: took 16.76578ms WaitForService to wait for kubelet
	I1210 23:24:32.298129   35456 kubeadm.go:587] duration metric: took 8.844489228s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:24:32.298152   35456 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:24:32.301758   35456 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 23:24:32.301787   35456 node_conditions.go:123] node cpu capacity is 2
	I1210 23:24:32.301801   35456 node_conditions.go:105] duration metric: took 3.641847ms to run NodePressure ...
	I1210 23:24:32.301814   35456 start.go:242] waiting for startup goroutines ...
	I1210 23:24:32.301820   35456 start.go:247] waiting for cluster config update ...
	I1210 23:24:32.301834   35456 start.go:256] writing updated cluster config ...
	I1210 23:24:32.302203   35456 ssh_runner.go:195] Run: rm -f paused
	I1210 23:24:32.307657   35456 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:24:32.308154   35456 kapi.go:59] client config for test-preload-732316: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/profiles/test-preload-732316/client.key", CAFile:"/home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 23:24:32.312200   35456 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bxjql" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:32.318915   35456 pod_ready.go:94] pod "coredns-66bc5c9577-bxjql" is "Ready"
	I1210 23:24:32.318944   35456 pod_ready.go:86] duration metric: took 6.724746ms for pod "coredns-66bc5c9577-bxjql" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:32.321552   35456 pod_ready.go:83] waiting for pod "etcd-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:32.326497   35456 pod_ready.go:94] pod "etcd-test-preload-732316" is "Ready"
	I1210 23:24:32.326520   35456 pod_ready.go:86] duration metric: took 4.94615ms for pod "etcd-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:32.329542   35456 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 23:24:34.334499   35456 pod_ready.go:104] pod "kube-apiserver-test-preload-732316" is not "Ready", error: <nil>
	I1210 23:24:36.335989   35456 pod_ready.go:94] pod "kube-apiserver-test-preload-732316" is "Ready"
	I1210 23:24:36.336019   35456 pod_ready.go:86] duration metric: took 4.006452009s for pod "kube-apiserver-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:36.338395   35456 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:36.343160   35456 pod_ready.go:94] pod "kube-controller-manager-test-preload-732316" is "Ready"
	I1210 23:24:36.343188   35456 pod_ready.go:86] duration metric: took 4.764689ms for pod "kube-controller-manager-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:36.345139   35456 pod_ready.go:83] waiting for pod "kube-proxy-5qj4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:36.512063   35456 pod_ready.go:94] pod "kube-proxy-5qj4j" is "Ready"
	I1210 23:24:36.512092   35456 pod_ready.go:86] duration metric: took 166.925131ms for pod "kube-proxy-5qj4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:36.711158   35456 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:37.511721   35456 pod_ready.go:94] pod "kube-scheduler-test-preload-732316" is "Ready"
	I1210 23:24:37.511760   35456 pod_ready.go:86] duration metric: took 800.579011ms for pod "kube-scheduler-test-preload-732316" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:24:37.511773   35456 pod_ready.go:40] duration metric: took 5.204079032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:24:37.554581   35456 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 23:24:37.556379   35456 out.go:179] * Done! kubectl is now configured to use "test-preload-732316" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.330915491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765409078330892423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c086944-7d17-4dca-a81a-d687a2fc76ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.332108653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d672f9f8-c75c-481f-ae5f-a1e1acc1ca4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.332483540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d672f9f8-c75c-481f-ae5f-a1e1acc1ca4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.332767142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0341a9dfa222119ece0f3f9b470cfce804b90c91e1e9e3d05bb44480f86ea7,PodSandboxId:a8d9fc904488870b812ebe84e12a82c36aae20fd988663dc5ad33d957163d4ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765409070528729623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bxjql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718050fae2c7f68f51caea82c798e7af19f070e4d53430529973f676730ce42b,PodSandboxId:6bee1bee577cf7fd24fe7c7651268ef6a6c51ece5c6b0256afbb51d5c7ad73c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765409062954153588,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qj4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5f77418-5e7d-4a0e-9d2e-cdec411d474b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3822f2ab398bd2dda21008c55c0bc66384b2a8ccd0ec14e2bd3ccb1d044583f7,PodSandboxId:9e95abc807b8bd56e35d76308de08fb1db815bfed19ce497982c4bea7d849013,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765409062933032703,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4fa3e3-df0f-41d0-9f59-21d973168ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7219f1f8cb03b66dcbda315dd8b6b3ed61659b0a323f3ef5492c75c44f0ce943,PodSandboxId:ce3d4d99a1723400c0bcd80fd0ca685f15efec5bfa3fda2adb557429e7d377a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765409058340308682,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d795c309cfbaf79226bb2698892b62,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152fda481058ee37a72ba67973842f0c712a7fac08a4caf474f2049aada400d,PodSandboxId:38864d9e7fd2507d6724a887ad345c4ebf11837e24351a6025ad7f6bc7f6af86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765409058306620997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdad580a96453547a13e6339a3d0672,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914eb3c8f87198efc231c043515951cb7868a07a2d902f1d80908b82ef872c3,PodSandboxId:6e51925ecff87f9444db3e0db26d798920875dcc871e608f0683d5d40fa18948,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765409058270092856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97cc65957e77028ca0c8e090d8920e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff9f5ab187880dc17a546475010046cd198f55fd2c26abf07c1f9e9fb0ddee,PodSandboxId:a54a19185a3ea358f0c4f5eed0faa47139fbc6f60c161e1a3f4d1804b5457f38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765409058286768531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02263f00277d19b6b008e4bb27bdfe0d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d672f9f8-c75c-481f-ae5f-a1e1acc1ca4a name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.364938683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e160a505-8a59-4b47-9f63-015027c9bef7 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.365013003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e160a505-8a59-4b47-9f63-015027c9bef7 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.366257516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbd068ab-a205-4e97-b32d-db26f00de907 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.366698299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765409078366676635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbd068ab-a205-4e97-b32d-db26f00de907 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.367777728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efd5b4b0-da09-4443-a30e-83462b52361e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.367833400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efd5b4b0-da09-4443-a30e-83462b52361e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.367991449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0341a9dfa222119ece0f3f9b470cfce804b90c91e1e9e3d05bb44480f86ea7,PodSandboxId:a8d9fc904488870b812ebe84e12a82c36aae20fd988663dc5ad33d957163d4ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765409070528729623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bxjql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718050fae2c7f68f51caea82c798e7af19f070e4d53430529973f676730ce42b,PodSandboxId:6bee1bee577cf7fd24fe7c7651268ef6a6c51ece5c6b0256afbb51d5c7ad73c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765409062954153588,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qj4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5f77418-5e7d-4a0e-9d2e-cdec411d474b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3822f2ab398bd2dda21008c55c0bc66384b2a8ccd0ec14e2bd3ccb1d044583f7,PodSandboxId:9e95abc807b8bd56e35d76308de08fb1db815bfed19ce497982c4bea7d849013,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765409062933032703,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4fa3e3-df0f-41d0-9f59-21d973168ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7219f1f8cb03b66dcbda315dd8b6b3ed61659b0a323f3ef5492c75c44f0ce943,PodSandboxId:ce3d4d99a1723400c0bcd80fd0ca685f15efec5bfa3fda2adb557429e7d377a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765409058340308682,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d795c309cfbaf79226bb2698892b62,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152fda481058ee37a72ba67973842f0c712a7fac08a4caf474f2049aada400d,PodSandboxId:38864d9e7fd2507d6724a887ad345c4ebf11837e24351a6025ad7f6bc7f6af86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765409058306620997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdad580a96453547a13e6339a3d0672,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914eb3c8f87198efc231c043515951cb7868a07a2d902f1d80908b82ef872c3,PodSandboxId:6e51925ecff87f9444db3e0db26d798920875dcc871e608f0683d5d40fa18948,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765409058270092856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97cc65957e77028ca0c8e090d8920e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff9f5ab187880dc17a546475010046cd198f55fd2c26abf07c1f9e9fb0ddee,PodSandboxId:a54a19185a3ea358f0c4f5eed0faa47139fbc6f60c161e1a3f4d1804b5457f38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765409058286768531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02263f00277d19b6b008e4bb27bdfe0d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efd5b4b0-da09-4443-a30e-83462b52361e name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.401873081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db40a76f-a5d7-4e7a-9d05-c2cbc1d7d293 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.401967924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db40a76f-a5d7-4e7a-9d05-c2cbc1d7d293 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.403151165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e1230ff-e09c-4f6d-9c0c-2a1138cbd6ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.403748427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765409078403726109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e1230ff-e09c-4f6d-9c0c-2a1138cbd6ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.404549353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82871001-bec6-4477-8ed4-bfb3ea4fa996 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.404709641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82871001-bec6-4477-8ed4-bfb3ea4fa996 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.405059414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0341a9dfa222119ece0f3f9b470cfce804b90c91e1e9e3d05bb44480f86ea7,PodSandboxId:a8d9fc904488870b812ebe84e12a82c36aae20fd988663dc5ad33d957163d4ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765409070528729623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bxjql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718050fae2c7f68f51caea82c798e7af19f070e4d53430529973f676730ce42b,PodSandboxId:6bee1bee577cf7fd24fe7c7651268ef6a6c51ece5c6b0256afbb51d5c7ad73c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765409062954153588,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qj4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5f77418-5e7d-4a0e-9d2e-cdec411d474b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3822f2ab398bd2dda21008c55c0bc66384b2a8ccd0ec14e2bd3ccb1d044583f7,PodSandboxId:9e95abc807b8bd56e35d76308de08fb1db815bfed19ce497982c4bea7d849013,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765409062933032703,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4fa3e3-df0f-41d0-9f59-21d973168ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7219f1f8cb03b66dcbda315dd8b6b3ed61659b0a323f3ef5492c75c44f0ce943,PodSandboxId:ce3d4d99a1723400c0bcd80fd0ca685f15efec5bfa3fda2adb557429e7d377a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765409058340308682,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d795c309cfbaf79226bb2698892b62,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152fda481058ee37a72ba67973842f0c712a7fac08a4caf474f2049aada400d,PodSandboxId:38864d9e7fd2507d6724a887ad345c4ebf11837e24351a6025ad7f6bc7f6af86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765409058306620997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdad580a96453547a13e6339a3d0672,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914eb3c8f87198efc231c043515951cb7868a07a2d902f1d80908b82ef872c3,PodSandboxId:6e51925ecff87f9444db3e0db26d798920875dcc871e608f0683d5d40fa18948,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765409058270092856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97cc65957e77028ca0c8e090d8920e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff9f5ab187880dc17a546475010046cd198f55fd2c26abf07c1f9e9fb0ddee,PodSandboxId:a54a19185a3ea358f0c4f5eed0faa47139fbc6f60c161e1a3f4d1804b5457f38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765409058286768531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02263f00277d19b6b008e4bb27bdfe0d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82871001-bec6-4477-8ed4-bfb3ea4fa996 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.434656032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea574aad-d141-4624-b4af-51fe9fc359d7 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.434754513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea574aad-d141-4624-b4af-51fe9fc359d7 name=/runtime.v1.RuntimeService/Version
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.436455348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19623e51-596c-4a0e-877f-619dcbc86631 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.437344242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765409078437278356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19623e51-596c-4a0e-877f-619dcbc86631 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.438463418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=207eda44-daa3-4989-855c-15ad861c2097 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.438528335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=207eda44-daa3-4989-855c-15ad861c2097 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 23:24:38 test-preload-732316 crio[836]: time="2025-12-10 23:24:38.438714139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe0341a9dfa222119ece0f3f9b470cfce804b90c91e1e9e3d05bb44480f86ea7,PodSandboxId:a8d9fc904488870b812ebe84e12a82c36aae20fd988663dc5ad33d957163d4ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765409070528729623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-bxjql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718050fae2c7f68f51caea82c798e7af19f070e4d53430529973f676730ce42b,PodSandboxId:6bee1bee577cf7fd24fe7c7651268ef6a6c51ece5c6b0256afbb51d5c7ad73c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765409062954153588,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qj4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5f77418-5e7d-4a0e-9d2e-cdec411d474b,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3822f2ab398bd2dda21008c55c0bc66384b2a8ccd0ec14e2bd3ccb1d044583f7,PodSandboxId:9e95abc807b8bd56e35d76308de08fb1db815bfed19ce497982c4bea7d849013,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765409062933032703,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4fa3e3-df0f-41d0-9f59-21d973168ea0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7219f1f8cb03b66dcbda315dd8b6b3ed61659b0a323f3ef5492c75c44f0ce943,PodSandboxId:ce3d4d99a1723400c0bcd80fd0ca685f15efec5bfa3fda2adb557429e7d377a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765409058340308682,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21d795c309cfbaf79226bb2698892b62,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152fda481058ee37a72ba67973842f0c712a7fac08a4caf474f2049aada400d,PodSandboxId:38864d9e7fd2507d6724a887ad345c4ebf11837e24351a6025ad7f6bc7f6af86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765409058306620997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdad580a96453547a13e6339a3d0672,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0914eb3c8f87198efc231c043515951cb7868a07a2d902f1d80908b82ef872c3,PodSandboxId:6e51925ecff87f9444db3e0db26d798920875dcc871e608f0683d5d40fa18948,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765409058270092856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97cc65957e77028ca0c8e090d8920e9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff9f5ab187880dc17a546475010046cd198f55fd2c26abf07c1f9e9fb0ddee,PodSandboxId:a54a19185a3ea358f0c4f5eed0faa47139fbc6f60c161e1a3f4d1804b5457f38,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765409058286768531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-732316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02263f00277d19b6b008e4bb27bdfe0d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=207eda44-daa3-4989-855c-15ad861c2097 name=/runtime.v1.RuntimeServic
e/ListContainers
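
The crio debug entries above are repeated requests against the CRI-O gRPC endpoints (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) during log collection. Below is a minimal, hypothetical Go sketch of issuing the same three calls directly with the k8s.io/cri-api client, assuming the default CRI-O socket path /var/run/crio/crio.sock; it is illustrative only and not part of the test harness.

    // Sketch: query the same CRI endpoints seen in the crio debug log above.
    // Assumes CRI-O is listening on its default unix socket.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        img := runtimeapi.NewImageServiceClient(conn)

        // RuntimeService/Version
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

        // ImageService/ImageFsInfo
        fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
        if err != nil {
            panic(err)
        }
        for _, f := range fs.ImageFilesystems {
            fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
        }

        // RuntimeService/ListContainers with no filter, like the log above
        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range list.Containers {
            fmt.Println(c.Metadata.Name, c.State)
        }
    }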
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	fe0341a9dfa22       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 seconds ago       Running             coredns                   1                   a8d9fc9044888       coredns-66bc5c9577-bxjql                      kube-system
	718050fae2c7f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   15 seconds ago      Running             kube-proxy                1                   6bee1bee577cf       kube-proxy-5qj4j                              kube-system
	3822f2ab398bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   9e95abc807b8b       storage-provisioner                           kube-system
	7219f1f8cb03b       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   ce3d4d99a1723       kube-scheduler-test-preload-732316            kube-system
	d152fda481058       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   38864d9e7fd25       kube-apiserver-test-preload-732316            kube-system
	caff9f5ab1878       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   a54a19185a3ea       kube-controller-manager-test-preload-732316   kube-system
	0914eb3c8f871       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   6e51925ecff87       etcd-test-preload-732316                      kube-system
	
	
	==> coredns [fe0341a9dfa222119ece0f3f9b470cfce804b90c91e1e9e3d05bb44480f86ea7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45267 - 50205 "HINFO IN 9222051680476858449.9195395679745406047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06120317s
	
	
	==> describe nodes <==
	Name:               test-preload-732316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-732316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=test-preload-732316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_22_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-732316
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:24:31 +0000   Wed, 10 Dec 2025 23:22:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:24:31 +0000   Wed, 10 Dec 2025 23:22:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:24:31 +0000   Wed, 10 Dec 2025 23:22:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:24:31 +0000   Wed, 10 Dec 2025 23:24:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    test-preload-732316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 1640882051524de2848caba5736b632e
	  System UUID:                16408820-5152-4de2-848c-aba5736b632e
	  Boot ID:                    a62b8a21-aad3-41d3-9b5b-29d1f705fa52
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bxjql                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-732316                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-732316             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-732316    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-5qj4j                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-732316             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 94s                  kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node test-preload-732316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node test-preload-732316 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node test-preload-732316 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     101s                 kubelet          Node test-preload-732316 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  101s                 kubelet          Node test-preload-732316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-732316 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                101s                 kubelet          Node test-preload-732316 status is now: NodeReady
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           97s                  node-controller  Node test-preload-732316 event: Registered Node test-preload-732316 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-732316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-732316 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-732316 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-732316 has been rebooted, boot id: a62b8a21-aad3-41d3-9b5b-29d1f705fa52
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-732316 event: Registered Node test-preload-732316 in Controller
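
For reference, the integer percentages in the Allocated resources table above are simply the summed requests divided by the node Allocatable figures (here equal to Capacity: 2 CPUs = 2000m, 3035912Ki memory). A quick hypothetical check, with values copied from the table:

    // Sketch: reproduce the request percentages shown in "Allocated resources".
    package main

    import "fmt"

    func main() {
        cpuRequestMilli, cpuAllocatableMilli := 750, 2000       // 750m of 2 CPUs
        memRequestKi, memAllocatableKi := 170*1024, 3035912     // 170Mi of 3035912Ki

        fmt.Printf("cpu: %d%%\n", cpuRequestMilli*100/cpuAllocatableMilli) // 37%
        fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocatableKi)    // 5%
    }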
	
	
	==> dmesg <==
	[Dec10 23:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec10 23:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000010] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.992146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100969] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.583533] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000092] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.025688] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [0914eb3c8f87198efc231c043515951cb7868a07a2d902f1d80908b82ef872c3] <==
	{"level":"warn","ts":"2025-12-10T23:24:20.477117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.494502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.513665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.525305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.546727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.551867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.567877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.598127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.606778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.632791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.632956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.655227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.666464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.673667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.683805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.696739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.711234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.715426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.726302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.740560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.757831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.766709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.778652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.791890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:24:20.894312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:24:38 up 0 min,  0 users,  load average: 1.20, 0.31, 0.10
	Linux test-preload-732316 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d152fda481058ee37a72ba67973842f0c712a7fac08a4caf474f2049aada400d] <==
	I1210 23:24:21.690439       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 23:24:21.696459       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 23:24:21.697392       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 23:24:21.697480       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 23:24:21.697501       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 23:24:21.708457       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 23:24:21.708552       1 policy_source.go:240] refreshing policies
	I1210 23:24:21.710287       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 23:24:21.690023       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 23:24:21.710443       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 23:24:21.710480       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 23:24:21.710727       1 aggregator.go:171] initial CRD sync complete...
	I1210 23:24:21.710762       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 23:24:21.710784       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 23:24:21.710799       1 cache.go:39] Caches are synced for autoregister controller
	I1210 23:24:21.754495       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 23:24:22.479267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 23:24:22.492751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 23:24:23.269370       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 23:24:23.317456       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 23:24:23.354619       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 23:24:23.367467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 23:24:25.136149       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 23:24:25.334137       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 23:24:25.386725       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [caff9f5ab187880dc17a546475010046cd198f55fd2c26abf07c1f9e9fb0ddee] <==
	I1210 23:24:24.997185       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 23:24:25.004696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:24:25.005617       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 23:24:25.005746       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:24:25.015267       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 23:24:25.023741       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:24:25.023327       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:24:25.031742       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 23:24:25.031785       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 23:24:25.032010       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 23:24:25.032194       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:24:25.032240       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 23:24:25.032394       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 23:24:25.032571       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 23:24:25.034670       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 23:24:25.034709       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 23:24:25.040325       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 23:24:25.044880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 23:24:25.044846       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 23:24:25.046121       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 23:24:25.046186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 23:24:25.053608       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:24:25.053636       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 23:24:25.053642       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 23:24:34.982510       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [718050fae2c7f68f51caea82c798e7af19f070e4d53430529973f676730ce42b] <==
	I1210 23:24:23.227879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:24:23.329351       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:24:23.329402       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.175"]
	E1210 23:24:23.329470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:24:23.386093       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 23:24:23.386204       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 23:24:23.386245       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:24:23.395222       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:24:23.395523       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:24:23.395539       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:24:23.399895       1 config.go:200] "Starting service config controller"
	I1210 23:24:23.399917       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:24:23.399932       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:24:23.399935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:24:23.399944       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:24:23.399947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:24:23.400503       1 config.go:309] "Starting node config controller"
	I1210 23:24:23.400529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:24:23.400536       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:24:23.500675       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 23:24:23.500766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:24:23.500781       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7219f1f8cb03b66dcbda315dd8b6b3ed61659b0a323f3ef5492c75c44f0ce943] <==
	I1210 23:24:20.410533       1 serving.go:386] Generated self-signed cert in-memory
	W1210 23:24:21.503516       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 23:24:21.503633       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 23:24:21.503647       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 23:24:21.503653       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 23:24:21.670638       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 23:24:21.672441       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:24:21.675791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:24:21.675866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 23:24:21.677567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 23:24:21.677633       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 23:24:21.776243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:24:21 test-preload-732316 kubelet[1185]: I1210 23:24:21.785393    1185 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 23:24:21 test-preload-732316 kubelet[1185]: I1210 23:24:21.786546    1185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 23:24:21 test-preload-732316 kubelet[1185]: I1210 23:24:21.788089    1185 setters.go:543] "Node became not ready" node="test-preload-732316" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T23:24:21Z","lastTransitionTime":"2025-12-10T23:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.425410    1185 apiserver.go:52] "Watching apiserver"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.431248    1185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-bxjql" podUID="3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.461870    1185 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.471686    1185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5f77418-5e7d-4a0e-9d2e-cdec411d474b-lib-modules\") pod \"kube-proxy-5qj4j\" (UID: \"b5f77418-5e7d-4a0e-9d2e-cdec411d474b\") " pod="kube-system/kube-proxy-5qj4j"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.471919    1185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa4fa3e3-df0f-41d0-9f59-21d973168ea0-tmp\") pod \"storage-provisioner\" (UID: \"fa4fa3e3-df0f-41d0-9f59-21d973168ea0\") " pod="kube-system/storage-provisioner"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.471962    1185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5f77418-5e7d-4a0e-9d2e-cdec411d474b-xtables-lock\") pod \"kube-proxy-5qj4j\" (UID: \"b5f77418-5e7d-4a0e-9d2e-cdec411d474b\") " pod="kube-system/kube-proxy-5qj4j"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.474125    1185 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.475231    1185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume podName:3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f nodeName:}" failed. No retries permitted until 2025-12-10 23:24:22.974547668 +0000 UTC m=+6.638081102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume") pod "coredns-66bc5c9577-bxjql" (UID: "3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f") : object "kube-system"/"coredns" not registered
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: I1210 23:24:22.592644    1185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-732316"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.607618    1185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-732316\" already exists" pod="kube-system/etcd-test-preload-732316"
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.976591    1185 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 23:24:22 test-preload-732316 kubelet[1185]: E1210 23:24:22.976708    1185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume podName:3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f nodeName:}" failed. No retries permitted until 2025-12-10 23:24:23.976692905 +0000 UTC m=+7.640226338 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume") pod "coredns-66bc5c9577-bxjql" (UID: "3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f") : object "kube-system"/"coredns" not registered
	Dec 10 23:24:23 test-preload-732316 kubelet[1185]: E1210 23:24:23.986687    1185 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 23:24:23 test-preload-732316 kubelet[1185]: E1210 23:24:23.986758    1185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume podName:3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f nodeName:}" failed. No retries permitted until 2025-12-10 23:24:25.986744719 +0000 UTC m=+9.650278164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume") pod "coredns-66bc5c9577-bxjql" (UID: "3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f") : object "kube-system"/"coredns" not registered
	Dec 10 23:24:24 test-preload-732316 kubelet[1185]: E1210 23:24:24.489032    1185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-bxjql" podUID="3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f"
	Dec 10 23:24:26 test-preload-732316 kubelet[1185]: E1210 23:24:26.000199    1185 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 23:24:26 test-preload-732316 kubelet[1185]: E1210 23:24:26.000315    1185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume podName:3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f nodeName:}" failed. No retries permitted until 2025-12-10 23:24:30.000292414 +0000 UTC m=+13.663825850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f-config-volume") pod "coredns-66bc5c9577-bxjql" (UID: "3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f") : object "kube-system"/"coredns" not registered
	Dec 10 23:24:26 test-preload-732316 kubelet[1185]: E1210 23:24:26.488241    1185 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-bxjql" podUID="3cdfbad8-9f5c-47c6-b15b-8c0e48cb696f"
	Dec 10 23:24:26 test-preload-732316 kubelet[1185]: E1210 23:24:26.529891    1185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765409066529374653 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 23:24:26 test-preload-732316 kubelet[1185]: E1210 23:24:26.529918    1185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765409066529374653 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 23:24:36 test-preload-732316 kubelet[1185]: E1210 23:24:36.532716    1185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765409076532050251 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 10 23:24:36 test-preload-732316 kubelet[1185]: E1210 23:24:36.532760    1185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765409076532050251 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [3822f2ab398bd2dda21008c55c0bc66384b2a8ccd0ec14e2bd3ccb1d044583f7] <==
	I1210 23:24:23.060053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-732316 -n test-preload-732316
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-732316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-732316" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-732316
--- FAIL: TestPreload (149.05s)
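The kubelet log above points at the two conditions worth checking when reproducing this failure: the node stays NotReady because no CNI configuration file is present in /etc/cni/net.d/, and the coredns pod cannot mount its config-volume because the kube-system/coredns ConfigMap is reported as "not registered". A minimal manual check, assuming the test-preload-732316 profile is still running (the cleanup step above deletes it at the end of the run), would be:

	out/minikube-linux-amd64 -p test-preload-732316 ssh "ls /etc/cni/net.d/"
	kubectl --context test-preload-732316 get pods -n kube-system -o wide

The repeated eviction-manager "missing image stats" errors echo the CRI-reported image_filesystems data; running "sudo crictl imagefsinfo" through the same ssh pattern would show that filesystem view directly, though nothing in this log ties it to the failure itself.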

                                                
                                    

Test pass (382/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.66
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 9.89
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.15
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 10.31
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.64
31 TestOffline 85.27
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 125.87
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 11.51
44 TestAddons/parallel/Registry 18.28
45 TestAddons/parallel/RegistryCreds 0.76
47 TestAddons/parallel/InspektorGadget 11.74
48 TestAddons/parallel/MetricsServer 7.41
50 TestAddons/parallel/CSI 46.93
51 TestAddons/parallel/Headlamp 19.87
52 TestAddons/parallel/CloudSpanner 5.57
53 TestAddons/parallel/LocalPath 57.88
54 TestAddons/parallel/NvidiaDevicePlugin 6.87
55 TestAddons/parallel/Yakd 12.26
57 TestAddons/StoppedEnableDisable 80.07
58 TestCertOptions 52.56
59 TestCertExpiration 298.67
61 TestForceSystemdFlag 40.52
62 TestForceSystemdEnv 75.24
67 TestErrorSpam/setup 40.56
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.65
70 TestErrorSpam/pause 1.51
71 TestErrorSpam/unpause 1.73
72 TestErrorSpam/stop 76.11
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 51.06
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 31.71
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
84 TestFunctional/serial/CacheCmd/cache/add_local 2.13
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 39.45
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.3
95 TestFunctional/serial/LogsFileCmd 1.35
96 TestFunctional/serial/InvalidService 4.35
98 TestFunctional/parallel/ConfigCmd 0.42
99 TestFunctional/parallel/DashboardCmd 47.48
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 1.09
106 TestFunctional/parallel/ServiceCmdConnect 10.55
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 37.85
110 TestFunctional/parallel/SSHCmd 0.32
111 TestFunctional/parallel/CpCmd 1.13
112 TestFunctional/parallel/MySQL 31.63
113 TestFunctional/parallel/FileSync 0.22
114 TestFunctional/parallel/CertSync 1.11
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
122 TestFunctional/parallel/License 0.38
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
133 TestFunctional/parallel/Version/short 0.07
134 TestFunctional/parallel/Version/components 1.08
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.47
139 TestFunctional/parallel/ImageCommands/ImageBuild 7.2
140 TestFunctional/parallel/ImageCommands/Setup 1.78
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
147 TestFunctional/parallel/ServiceCmd/List 0.23
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.24
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
151 TestFunctional/parallel/ServiceCmd/Format 0.24
152 TestFunctional/parallel/ServiceCmd/URL 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
157 TestFunctional/parallel/MountCmd/any-port 26.38
158 TestFunctional/parallel/ProfileCmd/profile_list 0.42
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
160 TestFunctional/parallel/MountCmd/specific-port 1.63
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 76.97
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 52.5
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.13
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.34
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.55
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 37.63
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.3
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.3
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.41
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.76
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.44
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 37.81
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.33
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.22
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 41.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.17
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.03
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.35
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.32
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.84
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.33
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.29
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.2
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 7.6
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.37
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.09
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.31
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.37
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.39
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.17
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.9
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.49
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.45
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.66
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.56
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.4
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 28.3
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.42
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.23
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.31
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.65
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 215.71
262 TestMultiControlPlane/serial/DeployApp 7.97
263 TestMultiControlPlane/serial/PingHostFromPods 1.29
264 TestMultiControlPlane/serial/AddWorkerNode 45.38
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
267 TestMultiControlPlane/serial/CopyFile 10.72
268 TestMultiControlPlane/serial/StopSecondaryNode 87.17
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.5
270 TestMultiControlPlane/serial/RestartSecondaryNode 32.1
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 293.83
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.98
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 174.62
276 TestMultiControlPlane/serial/RestartCluster 99.38
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
278 TestMultiControlPlane/serial/AddSecondaryNode 98.61
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestJSONOutput/start/Command 51.1
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.71
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.64
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.86
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 79.34
316 TestMountStart/serial/StartWithMountFirst 20.69
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 20.47
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.68
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.28
323 TestMountStart/serial/RestartStopped 18.59
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 129.88
328 TestMultiNode/serial/DeployApp2Nodes 5.96
329 TestMultiNode/serial/PingHostFrom2Pods 0.83
330 TestMultiNode/serial/AddNode 41.86
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 5.97
334 TestMultiNode/serial/StopNode 2.36
335 TestMultiNode/serial/StartAfterStop 37.84
336 TestMultiNode/serial/RestartKeepsNodes 290.47
337 TestMultiNode/serial/DeleteNode 2.62
338 TestMultiNode/serial/StopMultiNode 165.96
339 TestMultiNode/serial/RestartMultiNode 93.03
340 TestMultiNode/serial/ValidateNameConflict 41.79
347 TestScheduledStopUnix 107.5
351 TestRunningBinaryUpgrade 372.13
353 TestKubernetesUpgrade 148.75
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 95.07
358 TestStoppedBinaryUpgrade/Setup 3.6
359 TestStoppedBinaryUpgrade/Upgrade 113.48
360 TestNoKubernetes/serial/StartWithStopK8s 5.96
361 TestNoKubernetes/serial/Start 19.58
362 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
363 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
364 TestNoKubernetes/serial/ProfileList 0.78
365 TestNoKubernetes/serial/Stop 1.35
366 TestNoKubernetes/serial/StartNoArgs 49.38
367 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
368 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
377 TestPause/serial/Start 93.2
378 TestPause/serial/SecondStartNoReconfiguration 37.74
386 TestNetworkPlugins/group/false 4.82
390 TestISOImage/Setup 19.56
392 TestStartStop/group/old-k8s-version/serial/FirstStart 67.15
394 TestISOImage/Binaries/crictl 0.17
395 TestISOImage/Binaries/curl 0.17
396 TestISOImage/Binaries/docker 0.18
397 TestISOImage/Binaries/git 0.17
398 TestISOImage/Binaries/iptables 0.16
399 TestISOImage/Binaries/podman 0.16
400 TestISOImage/Binaries/rsync 0.17
401 TestISOImage/Binaries/socat 0.18
402 TestISOImage/Binaries/wget 0.17
403 TestISOImage/Binaries/VBoxControl 0.17
404 TestISOImage/Binaries/VBoxService 0.17
406 TestStartStop/group/no-preload/serial/FirstStart 113.89
407 TestPause/serial/Pause 0.72
408 TestPause/serial/VerifyStatus 0.21
409 TestPause/serial/Unpause 0.63
410 TestPause/serial/PauseAgain 0.82
411 TestPause/serial/DeletePaused 0.83
412 TestPause/serial/VerifyDeletedResources 14.83
414 TestStartStop/group/embed-certs/serial/FirstStart 101.19
415 TestStartStop/group/old-k8s-version/serial/DeployApp 11.37
416 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.78
417 TestStartStop/group/old-k8s-version/serial/Stop 88.49
418 TestStartStop/group/no-preload/serial/DeployApp 10.32
419 TestStartStop/group/embed-certs/serial/DeployApp 11.32
420 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
421 TestStartStop/group/no-preload/serial/Stop 84.93
422 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
423 TestStartStop/group/embed-certs/serial/Stop 87.37
424 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
425 TestStartStop/group/old-k8s-version/serial/SecondStart 38.57
426 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.01
427 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
428 TestStartStop/group/no-preload/serial/SecondStart 52.07
429 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
431 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.31
432 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
433 TestStartStop/group/old-k8s-version/serial/Pause 2.69
435 TestStartStop/group/newest-cni/serial/FirstStart 63.56
436 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
437 TestStartStop/group/embed-certs/serial/SecondStart 86.82
438 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12
439 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
440 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
441 TestStartStop/group/no-preload/serial/Pause 2.85
442 TestNetworkPlugins/group/auto/Start 60.25
443 TestStartStop/group/newest-cni/serial/DeployApp 0
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.14
445 TestStartStop/group/newest-cni/serial/Stop 7.21
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
447 TestStartStop/group/newest-cni/serial/SecondStart 42.75
448 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
449 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
450 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
451 TestStartStop/group/default-k8s-diff-port/serial/Stop 85.47
452 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
453 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
454 TestStartStop/group/embed-certs/serial/Pause 2.93
455 TestNetworkPlugins/group/kindnet/Start 60.33
456 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
457 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
458 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
459 TestStartStop/group/newest-cni/serial/Pause 2.29
460 TestNetworkPlugins/group/calico/Start 95.83
461 TestNetworkPlugins/group/auto/KubeletFlags 0.16
462 TestNetworkPlugins/group/auto/NetCatPod 11.49
463 TestNetworkPlugins/group/auto/DNS 0.16
464 TestNetworkPlugins/group/auto/Localhost 0.19
465 TestNetworkPlugins/group/auto/HairPin 0.13
466 TestNetworkPlugins/group/custom-flannel/Start 70.77
467 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
468 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
469 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
470 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
471 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.38
472 TestNetworkPlugins/group/kindnet/DNS 0.18
473 TestNetworkPlugins/group/kindnet/Localhost 0.19
474 TestNetworkPlugins/group/kindnet/HairPin 0.16
475 TestNetworkPlugins/group/enable-default-cni/Start 82.33
476 TestNetworkPlugins/group/calico/ControllerPod 6.01
477 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
478 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
479 TestNetworkPlugins/group/calico/KubeletFlags 0.23
480 TestNetworkPlugins/group/calico/NetCatPod 13.35
481 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
482 TestNetworkPlugins/group/custom-flannel/DNS 0.37
483 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
484 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
485 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
486 TestNetworkPlugins/group/calico/DNS 0.19
487 TestNetworkPlugins/group/calico/Localhost 0.17
488 TestNetworkPlugins/group/calico/HairPin 0.16
489 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
490 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
491 TestNetworkPlugins/group/flannel/Start 66.42
492 TestNetworkPlugins/group/bridge/Start 95.56
494 TestISOImage/PersistentMounts//data 0.18
495 TestISOImage/PersistentMounts//var/lib/docker 0.2
496 TestISOImage/PersistentMounts//var/lib/cni 0.17
497 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
498 TestISOImage/PersistentMounts//var/lib/minikube 0.17
499 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
500 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
501 TestISOImage/VersionJSON 0.18
502 TestISOImage/eBPFSupport 0.17
503 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
504 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.58
505 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
506 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
507 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
508 TestNetworkPlugins/group/flannel/ControllerPod 6.01
509 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
510 TestNetworkPlugins/group/flannel/NetCatPod 9.22
511 TestNetworkPlugins/group/flannel/DNS 0.15
512 TestNetworkPlugins/group/flannel/Localhost 0.11
513 TestNetworkPlugins/group/flannel/HairPin 0.18
514 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
515 TestNetworkPlugins/group/bridge/NetCatPod 10.23
516 TestNetworkPlugins/group/bridge/DNS 0.14
517 TestNetworkPlugins/group/bridge/Localhost 0.12
518 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (22.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-261584 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-261584 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.660612004s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.66s)
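With --download-only, minikube fetches the VM boot image and the preload tarball into its cache and exits without ever creating the host; the LogsDuration output for this profile below confirms the control-plane node was never brought up. A quick way to verify nothing is running, assuming the profile has not yet been removed by the DeleteAll/DeleteAlwaysSucceeds steps, is:

	out/minikube-linux-amd64 status -p download-only-261584

which is expected to report the host as nonexistent and exit non-zero.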

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 22:26:09.139234    9065 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 22:26:09.139318    9065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
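This check appears to do no more than verify the tarball is on disk. The equivalent manual spot-check, using the path printed in the log above, is simply:

	ls -lh /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/

which should list preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 as downloaded by the json-events step.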

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-261584
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-261584: exit status 85 (70.536976ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-261584 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-261584 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:25:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:25:46.530917    9077 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:25:46.531202    9077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:46.531212    9077 out.go:374] Setting ErrFile to fd 2...
	I1210 22:25:46.531216    9077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:25:46.531393    9077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	W1210 22:25:46.531537    9077 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22061-5125/.minikube/config/config.json: open /home/jenkins/minikube-integration/22061-5125/.minikube/config/config.json: no such file or directory
	I1210 22:25:46.532041    9077 out.go:368] Setting JSON to true
	I1210 22:25:46.533047    9077 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":488,"bootTime":1765405059,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:25:46.533106    9077 start.go:143] virtualization: kvm guest
	I1210 22:25:46.537800    9077 out.go:99] [download-only-261584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1210 22:25:46.537947    9077 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 22:25:46.537993    9077 notify.go:221] Checking for updates...
	I1210 22:25:46.539300    9077 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:25:46.540926    9077 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:25:46.542421    9077 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:25:46.543899    9077 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:25:46.545404    9077 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:25:46.547855    9077 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:25:46.548121    9077 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:25:47.050094    9077 out.go:99] Using the kvm2 driver based on user configuration
	I1210 22:25:47.050120    9077 start.go:309] selected driver: kvm2
	I1210 22:25:47.050126    9077 start.go:927] validating driver "kvm2" against <nil>
	I1210 22:25:47.050474    9077 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:25:47.050993    9077 start_flags.go:425] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 22:25:47.051153    9077 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:25:47.051186    9077 cni.go:84] Creating CNI manager for ""
	I1210 22:25:47.051237    9077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:25:47.051246    9077 start_flags.go:351] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 22:25:47.051304    9077 start.go:353] cluster config:
	{Name:download-only-261584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-261584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:25:47.051512    9077 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:25:47.053120    9077 out.go:99] Downloading VM boot image ...
	I1210 22:25:47.053154    9077 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22061-5125/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 22:25:56.958390    9077 out.go:99] Starting "download-only-261584" primary control-plane node in "download-only-261584" cluster
	I1210 22:25:56.958431    9077 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 22:25:57.050046    9077 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 22:25:57.050077    9077 cache.go:65] Caching tarball of preloaded images
	I1210 22:25:57.050257    9077 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 22:25:57.052203    9077 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 22:25:57.052220    9077 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 22:25:57.147596    9077 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1210 22:25:57.147741    9077 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-261584 host does not exist
	  To start a cluster, run: "minikube start -p download-only-261584"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
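The exit status 85 here is expected for a download-only profile: as the captured output notes, the control-plane host for download-only-261584 was never created, so there is no running node for "minikube logs" to collect from, and the test treats the failure as expected (it still reports PASS). To see the same behaviour by hand, assuming the profile still exists:

	out/minikube-linux-amd64 logs -p download-only-261584; echo "exit=$?"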

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-261584
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (9.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-722128 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-722128 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.885017682s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.89s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 22:26:19.396192    9065 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 22:26:19.396228    9065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-722128
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-722128: exit status 85 (69.355989ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-261584 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-261584 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ delete  │ -p download-only-261584                                                                                                                                                 │ download-only-261584 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ start   │ -o=json --download-only -p download-only-722128 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-722128 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:26:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:26:09.563625    9316 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:26:09.563748    9316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:09.563759    9316 out.go:374] Setting ErrFile to fd 2...
	I1210 22:26:09.563764    9316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:09.563985    9316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:26:09.564518    9316 out.go:368] Setting JSON to true
	I1210 22:26:09.565300    9316 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":511,"bootTime":1765405059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:26:09.565350    9316 start.go:143] virtualization: kvm guest
	I1210 22:26:09.567429    9316 out.go:99] [download-only-722128] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:26:09.567591    9316 notify.go:221] Checking for updates...
	I1210 22:26:09.568965    9316 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:26:09.570245    9316 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:26:09.571410    9316 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:26:09.572646    9316 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:26:09.573874    9316 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:26:09.575924    9316 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:26:09.576185    9316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:26:09.610543    9316 out.go:99] Using the kvm2 driver based on user configuration
	I1210 22:26:09.610586    9316 start.go:309] selected driver: kvm2
	I1210 22:26:09.610594    9316 start.go:927] validating driver "kvm2" against <nil>
	I1210 22:26:09.610932    9316 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:26:09.611419    9316 start_flags.go:425] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 22:26:09.611608    9316 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:26:09.611650    9316 cni.go:84] Creating CNI manager for ""
	I1210 22:26:09.611721    9316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:26:09.611732    9316 start_flags.go:351] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 22:26:09.611831    9316 start.go:353] cluster config:
	{Name:download-only-722128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-722128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:09.611959    9316 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:26:09.613262    9316 out.go:99] Starting "download-only-722128" primary control-plane node in "download-only-722128" cluster
	I1210 22:26:09.613302    9316 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:10.062163    9316 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 22:26:10.062207    9316 cache.go:65] Caching tarball of preloaded images
	I1210 22:26:10.062361    9316 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 22:26:10.064065    9316 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1210 22:26:10.064087    9316 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 22:26:10.162421    9316 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1210 22:26:10.162527    9316 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-722128 host does not exist
	  To start a cluster, run: "minikube start -p download-only-722128"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-722128
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (10.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-809442 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-809442 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.313504199s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (10.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 22:26:30.075698    9065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 22:26:30.075744    9065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-809442
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-809442: exit status 85 (73.425327ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-261584 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-261584 │ jenkins │ v1.37.0 │ 10 Dec 25 22:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ delete  │ -p download-only-261584                                                                                                                                                        │ download-only-261584 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ start   │ -o=json --download-only -p download-only-722128 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-722128 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ delete  │ -p download-only-722128                                                                                                                                                        │ download-only-722128 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
	│ start   │ -o=json --download-only -p download-only-809442 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-809442 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 22:26:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 22:26:19.812476    9514 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:26:19.812590    9514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:19.812598    9514 out.go:374] Setting ErrFile to fd 2...
	I1210 22:26:19.812602    9514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:26:19.813208    9514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:26:19.813693    9514 out.go:368] Setting JSON to true
	I1210 22:26:19.814511    9514 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":521,"bootTime":1765405059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:26:19.814565    9514 start.go:143] virtualization: kvm guest
	I1210 22:26:19.816453    9514 out.go:99] [download-only-809442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:26:19.816570    9514 notify.go:221] Checking for updates...
	I1210 22:26:19.818480    9514 out.go:171] MINIKUBE_LOCATION=22061
	I1210 22:26:19.820060    9514 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:26:19.821242    9514 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:26:19.822487    9514 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:26:19.823735    9514 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 22:26:19.826187    9514 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 22:26:19.826400    9514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:26:19.855791    9514 out.go:99] Using the kvm2 driver based on user configuration
	I1210 22:26:19.855826    9514 start.go:309] selected driver: kvm2
	I1210 22:26:19.855832    9514 start.go:927] validating driver "kvm2" against <nil>
	I1210 22:26:19.856117    9514 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 22:26:19.856637    9514 start_flags.go:425] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 22:26:19.856777    9514 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 22:26:19.856808    9514 cni.go:84] Creating CNI manager for ""
	I1210 22:26:19.856850    9514 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 22:26:19.856859    9514 start_flags.go:351] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 22:26:19.856902    9514 start.go:353] cluster config:
	{Name:download-only-809442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-809442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:26:19.856999    9514 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 22:26:19.858319    9514 out.go:99] Starting "download-only-809442" primary control-plane node in "download-only-809442" cluster
	I1210 22:26:19.858350    9514 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 22:26:20.319206    9514 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 22:26:20.319233    9514 cache.go:65] Caching tarball of preloaded images
	I1210 22:26:20.319374    9514 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 22:26:20.321106    9514 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1210 22:26:20.321125    9514 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 22:26:20.417555    9514 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1210 22:26:20.417604    9514 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 22:26:28.994317    9514 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 22:26:28.994692    9514 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/download-only-809442/config.json ...
	I1210 22:26:28.994723    9514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/download-only-809442/config.json: {Name:mkea82f373fb447bae3602343eb3f607637068f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 22:26:28.994895    9514 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 22:26:28.995096    9514 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22061-5125/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-809442 host does not exist
	  To start a cluster, run: "minikube start -p download-only-809442"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)
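The exit status 85 above is the expected outcome: a --download-only profile only populates the cache and never creates the host, so "minikube logs" has nothing to collect. A reproduction sketch using the same binary and flags as this run:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-809442 --force --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2
    out/minikube-linux-amd64 logs -p download-only-809442   # exits 85; the control-plane host does not exist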

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-809442
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1210 22:26:30.883669    9065 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-634983 --alsologtostderr --binary-mirror http://127.0.0.1:43689 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-634983" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-634983
--- PASS: TestBinaryMirror (0.64s)
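TestBinaryMirror redirects the Kubernetes binary downloads to a local HTTP endpoint via --binary-mirror. A rough sketch, assuming ./k8s-mirror is a directory laid out like the dl.k8s.io release tree (the directory and profile names here are illustrative; the port matches the one used above):

    python3 -m http.server 43689 --directory ./k8s-mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr --binary-mirror http://127.0.0.1:43689 --driver=kvm2 --container-runtime=crio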

                                                
                                    
x
+
TestOffline (85.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-584385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-584385 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.3794456s)
helpers_test.go:176: Cleaning up "offline-crio-584385" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-584385
--- PASS: TestOffline (85.27s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-462156
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-462156: exit status 85 (62.383065ms)

                                                
                                                
-- stdout --
	* Profile "addons-462156" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-462156"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-462156
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-462156: exit status 85 (61.953161ms)

                                                
                                                
-- stdout --
	* Profile "addons-462156" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-462156"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (125.87s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-462156 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-462156 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m5.867969833s)
--- PASS: TestAddons/Setup (125.87s)
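The addons enabled at start time here can also be toggled individually on the running cluster; a small sketch using the enable/disable subcommands exercised later in this report:

    out/minikube-linux-amd64 -p addons-462156 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-462156 addons disable metrics-server --alsologtostderr -v=1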

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-462156 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-462156 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-462156 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-462156 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [fad4956f-5563-487f-ab71-bb145da43547] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [fad4956f-5563-487f-ab71-bb145da43547] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.005021106s
addons_test.go:696: (dbg) Run:  kubectl --context addons-462156 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-462156 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-462156 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.862697ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-hbcct" [f09be740-9c3b-4dc9-ae13-adfd16ccaec2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.196989458s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-bs796" [dd3cf5fe-024d-49ac-9781-1c16ce0767bd] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004649923s
addons_test.go:394: (dbg) Run:  kubectl --context addons-462156 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-462156 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-462156 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.288759228s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 ip
2025/12/10 22:29:15 [DEBUG] GET http://192.168.39.89:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.28s)
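Besides the in-cluster wget probe, the registry addon is reachable from the host through the node IP on port 5000 (the DEBUG GET above hits the same endpoint). As an illustrative check, the standard registry catalog route can be queried directly:

    curl -s http://$(out/minikube-linux-amd64 -p addons-462156 ip):5000/v2/_catalog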

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 35.399297ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-462156
addons_test.go:334: (dbg) Run:  kubectl --context addons-462156 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-p4rml" [945c638e-20cc-46b8-9044-67f1387daac8] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00412072s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable inspektor-gadget --alsologtostderr -v=1: (5.732094439s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.476516ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-t4kn5" [72239687-ab58-4aee-b697-075933963bfc] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.19817972s
addons_test.go:465: (dbg) Run:  kubectl --context addons-462156 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable metrics-server --alsologtostderr -v=1: (1.112475636s)
--- PASS: TestAddons/parallel/MetricsServer (7.41s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.93s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1210 22:29:16.635113    9065 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 22:29:16.641178    9065 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 22:29:16.641201    9065 kapi.go:107] duration metric: took 6.104203ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.115449ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [bbce78b3-e197-44f0-83ce-c39c65655071] Pending
helpers_test.go:353: "task-pv-pod" [bbce78b3-e197-44f0-83ce-c39c65655071] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [bbce78b3-e197-44f0-83ce-c39c65655071] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004731524s
addons_test.go:574: (dbg) Run:  kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-462156 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-462156 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-462156 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-462156 delete pod task-pv-pod: (1.068180333s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-462156 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [dd7f6b0a-d0ef-48f7-8985-0df0c95f4b10] Pending
helpers_test.go:353: "task-pv-pod-restore" [dd7f6b0a-d0ef-48f7-8985-0df0c95f4b10] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [dd7f6b0a-d0ef-48f7-8985-0df0c95f4b10] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004217583s
addons_test.go:616: (dbg) Run:  kubectl --context addons-462156 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-462156 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-462156 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.95869015s)
--- PASS: TestAddons/parallel/CSI (46.93s)
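For reference, the CSI exercise above boils down to the following manifest sequence (the testdata paths are the ones the test applies and live in the minikube repository):

    kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-462156 delete pod task-pv-pod && kubectl --context addons-462156 delete pvc hpvc
    kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-462156 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml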

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-462156 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-pbqqx" [3fad6a74-823f-49b4-9501-9ae07b25d211] Pending
helpers_test.go:353: "headlamp-dfcdc64b-pbqqx" [3fad6a74-823f-49b4-9501-9ae07b25d211] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-pbqqx" [3fad6a74-823f-49b4-9501-9ae07b25d211] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005957884s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable headlamp --alsologtostderr -v=1: (6.004461424s)
--- PASS: TestAddons/parallel/Headlamp (19.87s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-xdznz" [eaa64735-b509-4ce9-90bc-6155773e4450] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004286935s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.88s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-462156 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-462156 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [908b97ce-bbd2-456c-95af-484515643a63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [908b97ce-bbd2-456c-95af-484515643a63] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [908b97ce-bbd2-456c-95af-484515643a63] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004177292s
addons_test.go:969: (dbg) Run:  kubectl --context addons-462156 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 ssh "cat /opt/local-path-provisioner/pvc-b4447a5f-b7fa-4088-983a-5d4d2b4a48d3_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-462156 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-462156 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.083759547s)
--- PASS: TestAddons/parallel/LocalPath (57.88s)
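The ssh cat above also reveals where the local-path provisioner keeps volume data: a per-volume directory under /opt/local-path-provisioner named after the PV, namespace, and PVC. A quick way to inspect it, as a sketch:

    out/minikube-linux-amd64 -p addons-462156 ssh "ls /opt/local-path-provisioner/"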

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.87s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-2knz8" [e3f636bc-8db9-4dc3-851a-f1331a2516e8] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.206423718s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.87s)
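When the plugin is healthy it advertises GPUs to the scheduler as the nvidia.com/gpu extended resource; a hedged way to confirm this from the host (on GPU-less KVM runners such as this one the resource may be absent or zero):

    kubectl --context addons-462156 get nodes -o jsonpath='{.items[0].status.allocatable}'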

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-blzkw" [b4a72f04-a1e1-468c-8b32-ed3eee303158] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005006836s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable yakd --alsologtostderr -v=1: (6.259122287s)
--- PASS: TestAddons/parallel/Yakd (12.26s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (80.07s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-462156
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-462156: (1m19.860621124s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-462156
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-462156
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-462156
--- PASS: TestAddons/StoppedEnableDisable (80.07s)

                                                
                                    
x
+
TestCertOptions (52.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-974070 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-974070 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (51.178010087s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-974070 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-974070 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-974070 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-974070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-974070
--- PASS: TestCertOptions (52.56s)

                                                
                                    
x
+
TestCertExpiration (298.67s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-450079 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-450079 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.835376015s)
E1210 23:27:35.029239    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-450079 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-450079 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m1.868339568s)
helpers_test.go:176: Cleaning up "cert-expiration-450079" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-450079
--- PASS: TestCertExpiration (298.67s)
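TestCertExpiration first issues deliberately short-lived certificates and then restarts the same profile with a one-year expiration; a condensed sketch of that flow (the profile name is illustrative):

    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180   # let the 3m certificates lapse before restarting
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio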

                                                
                                    
x
+
TestForceSystemdFlag (40.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-839649 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-839649 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (39.409424299s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-839649 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-839649" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-839649
--- PASS: TestForceSystemdFlag (40.52s)
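The ssh step above dumps CRI-O's generated drop-in; a sketch of the specific check, assuming the cgroup_manager key is what --force-systemd is expected to flip to systemd:

    out/minikube-linux-amd64 -p force-systemd-flag-839649 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expect: cgroup_manager = "systemd"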

                                                
                                    
x
+
TestForceSystemdEnv (75.24s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-283451 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-283451 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.360974914s)
helpers_test.go:176: Cleaning up "force-systemd-env-283451" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-283451
--- PASS: TestForceSystemdEnv (75.24s)

                                                
                                    
x
+
TestErrorSpam/setup (40.56s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-900835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-900835 --driver=kvm2  --container-runtime=crio
E1210 22:33:38.666353    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.672811    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.684239    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.705706    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.747179    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.828708    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:38.990275    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:39.312132    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:39.954226    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:41.235856    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:43.798600    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:33:48.920111    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-900835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-900835 --driver=kvm2  --container-runtime=crio: (40.557379122s)
--- PASS: TestErrorSpam/setup (40.56s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 pause
E1210 22:33:59.161410    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (76.11s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop
E1210 22:34:19.643131    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:35:00.606212    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop: (1m12.559750259s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop: (1.852557825s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-900835 --log_dir /tmp/nospam-900835 stop: (1.701132029s)
--- PASS: TestErrorSpam/stop (76.11s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/test/nested/copy/9065/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.06s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-820240 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (51.062374558s)
--- PASS: TestFunctional/serial/StartWithProxy (51.06s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1210 22:36:09.048145    9065 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --alsologtostderr -v=8
E1210 22:36:22.527562    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-820240 --alsologtostderr -v=8: (31.705614717s)
functional_test.go:678: soft start took 31.706316047s for "functional-820240" cluster.
I1210 22:36:40.754096    9065 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (31.71s)
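
Note: a "soft start" is simply a second `minikube start` against an already-running profile; it reuses the existing VM and config instead of reprovisioning. A minimal sketch using the values from this run:

	# initial start provisions the VM and cluster
	out/minikube-linux-amd64 start -p functional-820240 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio
	# a second start on the same profile is a soft start and should finish much faster
	out/minikube-linux-amd64 start -p functional-820240 --alsologtostderr -v=8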

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-820240 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:3.1: (1.07318529s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:3.3: (1.142819527s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:latest: (1.098940331s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-820240 /tmp/TestFunctionalserialCacheCmdcacheadd_local3354466684/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache add minikube-local-cache-test:functional-820240
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 cache add minikube-local-cache-test:functional-820240: (1.776614884s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache delete minikube-local-cache-test:functional-820240
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-820240
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (171.088442ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
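
Note: the cache tests above exercise the full round trip: cache an image, delete it from the node's runtime, then restore it from the cache. A condensed sketch mirroring those commands:

	# add an image to minikube's local cache
	out/minikube-linux-amd64 -p functional-820240 cache add registry.k8s.io/pause:latest
	# remove it from the node and confirm it is gone
	out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail here
	# reload from the cache and verify the image is back
	out/minikube-linux-amd64 -p functional-820240 cache reload
	out/minikube-linux-amd64 -p functional-820240 ssh sudo crictl inspecti registry.k8s.io/pause:latest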

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 kubectl -- --context functional-820240 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-820240 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.45s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-820240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.451938458s)
functional_test.go:776: restart took 39.452042587s for "functional-820240" cluster.
I1210 22:37:27.961083    9065 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (39.45s)
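
Note: --extra-config passes component flags through to the control plane on restart; the test then checks that the control-plane pods come back Ready. A sketch of the same flow (the `-o wide` listing is a convenience, not the test's exact JSON check):

	# restart the existing cluster with an extra apiserver admission plugin
	out/minikube-linux-amd64 start -p functional-820240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# confirm etcd, apiserver, controller-manager and scheduler are Running/Ready
	kubectl --context functional-820240 get po -l tier=control-plane -n kube-system -o wide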

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-820240 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 logs: (1.296375528s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 logs --file /tmp/TestFunctionalserialLogsFileCmd3037495793/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 logs --file /tmp/TestFunctionalserialLogsFileCmd3037495793/001/logs.txt: (1.352220699s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-820240 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-820240
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-820240: exit status 115 (236.805964ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.235:30854 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-820240 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)
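
Note: `minikube service` refuses to open a Service that has no running backing pod, exiting with SVC_UNREACHABLE (status 115 above) rather than printing a dead URL. A sketch of reproducing that behaviour with the same testdata:

	kubectl --context functional-820240 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-820240; echo "exit: $?"   # 115, SVC_UNREACHABLE
	kubectl --context functional-820240 delete -f testdata/invalidsvc.yaml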

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 config get cpus: exit status 14 (74.31765ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 config get cpus: exit status 14 (58.606202ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
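
Note: per-profile config values round-trip through set/get/unset, and `config get` on an unset key exits 14 with "specified key could not be found in config", which is exactly what the non-zero exits above show. A minimal sketch:

	out/minikube-linux-amd64 -p functional-820240 config set cpus 2
	out/minikube-linux-amd64 -p functional-820240 config get cpus          # prints 2
	out/minikube-linux-amd64 -p functional-820240 config unset cpus
	out/minikube-linux-amd64 -p functional-820240 config get cpus || echo "exit: $?"   # 14: key not found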

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (47.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-820240 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-820240 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 15423: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (47.48s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (113.853796ms)

                                                
                                                
-- stdout --
	* [functional-820240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:37:47.456698   15350 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:37:47.456822   15350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:37:47.456833   15350 out.go:374] Setting ErrFile to fd 2...
	I1210 22:37:47.456840   15350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:37:47.457054   15350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:37:47.457492   15350 out.go:368] Setting JSON to false
	I1210 22:37:47.458337   15350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1208,"bootTime":1765405059,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:37:47.458387   15350 start.go:143] virtualization: kvm guest
	I1210 22:37:47.460389   15350 out.go:179] * [functional-820240] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:37:47.461613   15350 notify.go:221] Checking for updates...
	I1210 22:37:47.461676   15350 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:37:47.462986   15350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:37:47.464226   15350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:37:47.465561   15350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:37:47.466746   15350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:37:47.467973   15350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:37:47.469502   15350 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:37:47.469940   15350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:37:47.503261   15350 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 22:37:47.504617   15350 start.go:309] selected driver: kvm2
	I1210 22:37:47.504634   15350 start.go:927] validating driver "kvm2" against &{Name:functional-820240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-820240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:37:47.504742   15350 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:37:47.506896   15350 out.go:203] 
	W1210 22:37:47.508323   15350 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 22:37:47.509585   15350 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
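
Note: --dry-run validates the requested settings against the existing profile without touching the VM; asking for 250MB trips minikube's 1800MB memory floor and produces the RSRC_INSUFFICIENT_REQ_MEMORY exit (status 23) seen above. A one-line sketch:

	out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio; echo "exit: $?"   # 23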

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (121.042616ms)

                                                
                                                
-- stdout --
	* [functional-820240] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:37:47.701650   15390 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:37:47.701841   15390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:37:47.701855   15390 out.go:374] Setting ErrFile to fd 2...
	I1210 22:37:47.701862   15390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:37:47.702355   15390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:37:47.703073   15390 out.go:368] Setting JSON to false
	I1210 22:37:47.704304   15390 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1209,"bootTime":1765405059,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:37:47.704397   15390 start.go:143] virtualization: kvm guest
	I1210 22:37:47.706213   15390 out.go:179] * [functional-820240] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 22:37:47.708291   15390 notify.go:221] Checking for updates...
	I1210 22:37:47.708313   15390 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:37:47.709903   15390 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:37:47.711368   15390 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:37:47.712945   15390 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:37:47.714135   15390 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:37:47.715251   15390 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:37:47.717189   15390 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:37:47.717847   15390 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:37:47.749657   15390 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 22:37:47.750720   15390 start.go:309] selected driver: kvm2
	I1210 22:37:47.750742   15390 start.go:927] validating driver "kvm2" against &{Name:functional-820240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-820240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:37:47.750894   15390 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:37:47.753200   15390 out.go:203] 
	W1210 22:37:47.754298   15390 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 22:37:47.755373   15390 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
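
Note: the French stderr above translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB", i.e. the same failure as the DryRun test, just localized. A sketch of forcing the localized output, assuming minikube picks the language up from the standard locale environment variables (LC_ALL here is an assumption about how the test sets up its environment):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-820240 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio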

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-820240 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-820240 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-mb9w9" [0cfa87ec-ab8a-4f39-b870-f05f5e1a6b02] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-mb9w9" [0cfa87ec-ab8a-4f39-b870-f05f5e1a6b02] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004649554s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.235:32649
functional_test.go:1680: http://192.168.39.235:32649: success! body:
Request served by hello-node-connect-7d85dfc575-mb9w9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.235:32649
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.55s)
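
Note: the connect test deploys an echo server, exposes it as a NodePort, resolves the node URL through minikube, and fetches it. A condensed sketch of that workflow (the curl step stands in for the test's HTTP check):

	kubectl --context functional-820240 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-820240 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-820240 service hello-node-connect --url)
	curl -s "$URL"   # echo-server replies with the request it received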

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (37.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [3eabc4fe-b3fc-4c5e-8d99-6fbd6e1c4993] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004624095s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-820240 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-820240 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-820240 get pvc myclaim -o=json
I1210 22:37:41.283517    9065 retry.go:31] will retry after 1.260202488s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:7c8d49f9-bd74-4b12-9ae2-4a7947ea3c8d ResourceVersion:734 Generation:0 CreationTimestamp:2025-12-10 22:37:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ca6c10 VolumeMode:0xc001ca6c20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-820240 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-820240 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a43a54ac-1bd8-4932-924a-fae3424ea5f3] Pending
helpers_test.go:353: "sp-pod" [a43a54ac-1bd8-4932-924a-fae3424ea5f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [a43a54ac-1bd8-4932-924a-fae3424ea5f3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004372375s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-820240 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-820240 exec sp-pod -- touch /tmp/mount/foo: (1.170615723s)
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-820240 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-820240 delete -f testdata/storage-provisioner/pod.yaml: (4.27180922s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-820240 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:38:00.343572    9065 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [db4b4213-d053-44e8-948c-098783b504b2] Pending
helpers_test.go:353: "sp-pod" [db4b4213-d053-44e8-948c-098783b504b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [db4b4213-d053-44e8-948c-098783b504b2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006783141s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-820240 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.85s)
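
Note: the PVC test writes a file from one pod and verifies it is still there after the pod is deleted and recreated, proving the claim's storage outlives the pod. A sketch using the same testdata and claim name (the jsonpath check is a convenience for "wait until Bound"):

	kubectl --context functional-820240 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-820240 get pvc myclaim -o jsonpath='{.status.phase}'   # wait for "Bound"
	kubectl --context functional-820240 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-820240 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-820240 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-820240 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-820240 exec sp-pod -- ls /tmp/mount   # foo should still be present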

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh -n functional-820240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cp functional-820240:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1409907642/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh -n functional-820240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh -n functional-820240 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-820240 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-hskng" [780ebc56-832c-4cf1-b572-de1a1120d69f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-hskng" [780ebc56-832c-4cf1-b572-de1a1120d69f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.006705101s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;": exit status 1 (290.949054ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:38:12.343369    9065 retry.go:31] will retry after 997.795814ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;": exit status 1 (390.892069ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:38:13.732923    9065 retry.go:31] will retry after 860.83631ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;": exit status 1 (427.253643ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:38:15.021856    9065 retry.go:31] will retry after 2.581210832s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-820240 exec mysql-6bcdcbc558-hskng -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.63s)
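
Note: the repeated "Access denied" exits above are expected: mysqld accepts the root password only a little after the pod reports Ready, so the test retries with backoff. A simple manual retry loop, assuming the Deployment created from testdata/mysql.yaml is named mysql and uses the same root password shown above:

	until kubectl --context functional-820240 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do sleep 2; done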

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9065/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /etc/test/nested/copy/9065/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9065.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /etc/ssl/certs/9065.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9065.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /usr/share/ca-certificates/9065.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/90652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /etc/ssl/certs/90652.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/90652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /usr/share/ca-certificates/90652.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-820240 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active docker": exit status 1 (155.467306ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active containerd": exit status 1 (158.541996ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)
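
Note: with crio selected as the container runtime, the other runtimes' systemd units should be inactive inside the VM; `systemctl is-active` prints "inactive" and exits 3 in that case, which is why the non-zero exits above still count as a pass. A sketch:

	out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active docker"       # "inactive", exit 3
	out/minikube-linux-amd64 -p functional-820240 ssh "sudo systemctl is-active containerd"   # "inactive", exit 3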

                                                
                                    
x
+
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-820240 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-820240 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-6lglj" [c4df8b52-3a71-488a-bbc0-570432a9a7b8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-6lglj" [c4df8b52-3a71-488a-bbc0-570432a9a7b8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005400088s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)
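The DeployApp step creates a Deployment from kicbase/echo-server, exposes it as a NodePort Service on port 8080, and then waits until a pod labelled app=hello-node is Running. A rough client-go sketch of that readiness poll, assuming a reachable kubeconfig at the default path; the namespace, label selector and overall timeout mirror the log, but this is not the test framework's own helper:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until a pod matching the selector reports phase Running, or time out.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=hello-node"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}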

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 version -o=json --components: (1.074921136s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-820240 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-820240
localhost/kicbase/echo-server:functional-820240
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-820240 image ls --format short --alsologtostderr:
I1210 22:38:14.080872   15714 out.go:360] Setting OutFile to fd 1 ...
I1210 22:38:14.081162   15714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:14.081172   15714 out.go:374] Setting ErrFile to fd 2...
I1210 22:38:14.081176   15714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:14.081404   15714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:38:14.082007   15714 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:14.082107   15714 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:14.084568   15714 ssh_runner.go:195] Run: systemctl --version
I1210 22:38:14.087046   15714 main.go:143] libmachine: domain functional-820240 has defined MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:14.087477   15714 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:d8:ab", ip: ""} in network mk-functional-820240: {Iface:virbr1 ExpiryTime:2025-12-10 23:35:32 +0000 UTC Type:0 Mac:52:54:00:60:d8:ab Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-820240 Clientid:01:52:54:00:60:d8:ab}
I1210 22:38:14.087511   15714 main.go:143] libmachine: domain functional-820240 has defined IP address 192.168.39.235 and MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:14.087686   15714 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-820240/id_rsa Username:docker}
I1210 22:38:14.198587   15714 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-820240 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ localhost/minikube-local-cache-test     │ functional-820240  │ f1af58d46fafa │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-820240  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-820240 image ls --format table --alsologtostderr:
I1210 22:38:15.871644   15855 out.go:360] Setting OutFile to fd 1 ...
I1210 22:38:15.871737   15855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.871742   15855 out.go:374] Setting ErrFile to fd 2...
I1210 22:38:15.871753   15855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.872093   15855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:38:15.872713   15855 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.872807   15855 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.874986   15855 ssh_runner.go:195] Run: systemctl --version
I1210 22:38:15.877377   15855 main.go:143] libmachine: domain functional-820240 has defined MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.877780   15855 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:d8:ab", ip: ""} in network mk-functional-820240: {Iface:virbr1 ExpiryTime:2025-12-10 23:35:32 +0000 UTC Type:0 Mac:52:54:00:60:d8:ab Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-820240 Clientid:01:52:54:00:60:d8:ab}
I1210 22:38:15.877809   15855 main.go:143] libmachine: domain functional-820240 has defined IP address 192.168.39.235 and MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.877974   15855 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-820240/id_rsa Username:docker}
I1210 22:38:15.968502   15855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-820240 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-820240"],"size":"4945146"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a
2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a2
61e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"f1af58d46fafac9b7a6e63ff9d077b88f6bf8967934e97b9b7281f05b44e7424","repoDigests":["localhost/minikube-local-cache-test@sha256:b4e146d445108f7495d0757c01299c97b3d87fcd7b6911267184c95d68370d57"],"repoTags":["localhost/minikube-local-cache-test:functional-820240"],"size":"3330"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff",
"public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d31
1b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDi
gests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-schedu
ler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-820240 image ls --format json --alsologtostderr:
I1210 22:38:15.569022   15845 out.go:360] Setting OutFile to fd 1 ...
I1210 22:38:15.569109   15845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.569116   15845 out.go:374] Setting ErrFile to fd 2...
I1210 22:38:15.569121   15845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.569297   15845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:38:15.569853   15845 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.569940   15845 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.572154   15845 ssh_runner.go:195] Run: systemctl --version
I1210 22:38:15.574595   15845 main.go:143] libmachine: domain functional-820240 has defined MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.574951   15845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:d8:ab", ip: ""} in network mk-functional-820240: {Iface:virbr1 ExpiryTime:2025-12-10 23:35:32 +0000 UTC Type:0 Mac:52:54:00:60:d8:ab Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-820240 Clientid:01:52:54:00:60:d8:ab}
I1210 22:38:15.574975   15845 main.go:143] libmachine: domain functional-820240 has defined IP address 192.168.39.235 and MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.575123   15845 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-820240/id_rsa Username:docker}
I1210 22:38:15.741099   15845 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
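The JSON listing above is an array of image records with id, repoDigests, repoTags and size fields (size is a byte count encoded as a string). A small decoding sketch for output of that shape, assuming it is piped in on stdin; the struct and field names are taken from the log output, not from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in `minikube image ls --format json` above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}

For example, it could be fed with: out/minikube-linux-amd64 -p functional-820240 image ls --format json | go run listimages.go (file name hypothetical).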

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-820240 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-820240
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: f1af58d46fafac9b7a6e63ff9d077b88f6bf8967934e97b9b7281f05b44e7424
repoDigests:
- localhost/minikube-local-cache-test@sha256:b4e146d445108f7495d0757c01299c97b3d87fcd7b6911267184c95d68370d57
repoTags:
- localhost/minikube-local-cache-test:functional-820240
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-820240 image ls --format yaml --alsologtostderr:
I1210 22:38:14.496761   15763 out.go:360] Setting OutFile to fd 1 ...
I1210 22:38:14.496874   15763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:14.496885   15763 out.go:374] Setting ErrFile to fd 2...
I1210 22:38:14.496892   15763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:14.497103   15763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:38:14.497722   15763 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:14.497844   15763 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:14.499991   15763 ssh_runner.go:195] Run: systemctl --version
I1210 22:38:14.502399   15763 main.go:143] libmachine: domain functional-820240 has defined MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:14.502877   15763 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:d8:ab", ip: ""} in network mk-functional-820240: {Iface:virbr1 ExpiryTime:2025-12-10 23:35:32 +0000 UTC Type:0 Mac:52:54:00:60:d8:ab Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-820240 Clientid:01:52:54:00:60:d8:ab}
I1210 22:38:14.502908   15763 main.go:143] libmachine: domain functional-820240 has defined IP address 192.168.39.235 and MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:14.503042   15763 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-820240/id_rsa Username:docker}
I1210 22:38:14.739876   15763 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (7.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh pgrep buildkitd: exit status 1 (232.290205ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image build -t localhost/my-image:functional-820240 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-820240 image build -t localhost/my-image:functional-820240 testdata/build --alsologtostderr: (6.742069926s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-820240 image build -t localhost/my-image:functional-820240 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4378b5476c6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-820240
--> 3a8127f7882
Successfully tagged localhost/my-image:functional-820240
3a8127f7882406cc0a6f587332c3047580dc2bed3ccf991fb42bfa942cae4504
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-820240 image build -t localhost/my-image:functional-820240 testdata/build --alsologtostderr:
I1210 22:38:15.198610   15813 out.go:360] Setting OutFile to fd 1 ...
I1210 22:38:15.198800   15813 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.198811   15813 out.go:374] Setting ErrFile to fd 2...
I1210 22:38:15.198816   15813 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:38:15.199029   15813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:38:15.199659   15813 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.200339   15813 config.go:182] Loaded profile config "functional-820240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:38:15.202788   15813 ssh_runner.go:195] Run: systemctl --version
I1210 22:38:15.205219   15813 main.go:143] libmachine: domain functional-820240 has defined MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.205757   15813 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:d8:ab", ip: ""} in network mk-functional-820240: {Iface:virbr1 ExpiryTime:2025-12-10 23:35:32 +0000 UTC Type:0 Mac:52:54:00:60:d8:ab Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-820240 Clientid:01:52:54:00:60:d8:ab}
I1210 22:38:15.205796   15813 main.go:143] libmachine: domain functional-820240 has defined IP address 192.168.39.235 and MAC address 52:54:00:60:d8:ab in network mk-functional-820240
I1210 22:38:15.205992   15813 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-820240/id_rsa Username:docker}
I1210 22:38:15.328783   15813 build_images.go:162] Building image from path: /tmp/build.1872026948.tar
I1210 22:38:15.328859   15813 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 22:38:15.380241   15813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1872026948.tar
I1210 22:38:15.392893   15813 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1872026948.tar: stat -c "%s %y" /var/lib/minikube/build/build.1872026948.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1872026948.tar': No such file or directory
I1210 22:38:15.392947   15813 ssh_runner.go:362] scp /tmp/build.1872026948.tar --> /var/lib/minikube/build/build.1872026948.tar (3072 bytes)
I1210 22:38:15.481956   15813 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1872026948
I1210 22:38:15.523215   15813 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1872026948 -xf /var/lib/minikube/build/build.1872026948.tar
I1210 22:38:15.562515   15813 crio.go:315] Building image: /var/lib/minikube/build/build.1872026948
I1210 22:38:15.562601   15813 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-820240 /var/lib/minikube/build/build.1872026948 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 22:38:21.827729   15813 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-820240 /var/lib/minikube/build/build.1872026948 --cgroup-manager=cgroupfs: (6.265105474s)
I1210 22:38:21.827788   15813 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1872026948
I1210 22:38:21.858578   15813 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1872026948.tar
I1210 22:38:21.878499   15813 build_images.go:218] Built localhost/my-image:functional-820240 from /tmp/build.1872026948.tar
I1210 22:38:21.878534   15813 build_images.go:134] succeeded building to: functional-820240
I1210 22:38:21.878539   15813 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
2025/12/10 22:38:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.20s)
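The stderr above shows the sequence behind `image build` on a crio node: copy the packed build context tarball to /var/lib/minikube/build, unpack it, run `sudo podman build -t <tag> <dir> --cgroup-manager=cgroupfs`, then clean up. A hedged local sketch of the same command sequence with os/exec; the tarball and directory names here are placeholders (the real run used a temp name like build.1872026948.tar), a real run executes these over SSH inside the guest, and this sketch assumes sudo and podman are available where it runs:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	fmt.Println("+", cmd.String())
	return cmd.Run()
}

func main() {
	const (
		tarball  = "/var/lib/minikube/build/build.example.tar" // placeholder name
		buildDir = "/var/lib/minikube/build/build.example"     // placeholder name
		tag      = "localhost/my-image:functional-820240"
	)
	steps := [][]string{
		{"sudo", "mkdir", "-p", buildDir},
		{"sudo", "tar", "-C", buildDir, "-xf", tarball},
		{"sudo", "podman", "build", "-t", tag, buildDir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", buildDir},
		{"sudo", "rm", "-f", tarball},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Fprintln(os.Stderr, "step failed:", err)
			os.Exit(1)
		}
	}
}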

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.754542422s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-820240
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image load --daemon kicbase/echo-server:functional-820240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image load --daemon kicbase/echo-server:functional-820240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-820240
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image load --daemon kicbase/echo-server:functional-820240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image save kicbase/echo-server:functional-820240 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1210 22:37:42.745259    9065 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image rm kicbase/echo-server:functional-820240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)
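Each save/load step above is followed by an `image ls` call (functional_test.go:466) to confirm the tag is present again inside the cluster. A compact verification sketch with os/exec, assuming the binary path, profile name and tag used throughout this run; this is an illustrative check, not the test's own assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // binary path used in this run
		profile  = "functional-820240"
		tag      = "localhost/kicbase/echo-server:" + profile
	)
	// List images in the cluster and look for the reloaded tag.
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("image present after load:", tag)
	} else {
		fmt.Println("image missing:", tag)
	}
}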

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-820240
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 image save --daemon kicbase/echo-server:functional-820240 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-820240
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service list -o json
functional_test.go:1504: Took "240.462247ms" to run "out/minikube-linux-amd64 -p functional-820240 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.235:30768
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.235:30768
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)
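Both URL checks above resolve to http(s)://<node IP>:<NodePort>, e.g. http://192.168.39.235:30768. A small sketch that derives the same URL with kubectl via os/exec, assuming the hello-node Service from the DeployApp step still exists in the default namespace; the jsonpath expressions are standard kubectl, not a minikube helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-820240"}, args...)
	out, err := exec.Command("kubectl", full...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// NodePort assigned to the hello-node Service.
	port, err := kubectl("get", "svc", "hello-node", "-o",
		"jsonpath={.spec.ports[0].nodePort}")
	if err != nil {
		panic(err)
	}
	// InternalIP of the single node.
	ip, err := kubectl("get", "nodes", "-o",
		"jsonpath={.items[0].status.addresses[?(@.type==\"InternalIP\")].address}")
	if err != nil {
		panic(err)
	}
	fmt.Printf("http://%s:%s\n", ip, port)
}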

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
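The three update-context checks verify that `minikube update-context` leaves the kubeconfig pointing at the cluster's current API server address. A small inspection sketch that reads a kubeconfig with client-go's clientcmd and prints each cluster's server URL, which is the field update-context rewrites; the kubeconfig path is assumed to be the default, and this is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	for name, cluster := range cfg.Clusters {
		fmt.Printf("cluster %q -> %s\n", name, cluster.Server)
	}
	fmt.Println("current context:", cfg.CurrentContext)
}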

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (26.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdany-port1171942670/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765406266149815847" to /tmp/TestFunctionalparallelMountCmdany-port1171942670/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765406266149815847" to /tmp/TestFunctionalparallelMountCmdany-port1171942670/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765406266149815847" to /tmp/TestFunctionalparallelMountCmdany-port1171942670/001/test-1765406266149815847
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.11708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:37:46.422284    9065 retry.go:31] will retry after 447.42602ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 22:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 22:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 22:37 test-1765406266149815847
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh cat /mount-9p/test-1765406266149815847
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-820240 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [2f206a50-1b86-4940-9b5f-74d2dd4ccdac] Pending
helpers_test.go:353: "busybox-mount" [2f206a50-1b86-4940-9b5f-74d2dd4ccdac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [2f206a50-1b86-4940-9b5f-74d2dd4ccdac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [2f206a50-1b86-4940-9b5f-74d2dd4ccdac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.015119373s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-820240 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdany-port1171942670/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.38s)
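The mount test polls `findmnt -T /mount-9p | grep 9p` over SSH and, as the retry.go lines show, backs off and retries while the 9p mount is not yet visible. A generic retry helper in that spirit; the function name, backoff values and the locally executed findmnt command are illustrative, not the test framework's retry package:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or the deadline passes, sleeping a growing
// interval between attempts, similar in spirit to the retry.go lines above.
func retry(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 500 * time.Millisecond
	err := errors.New("never attempted")
	for time.Now().Before(deadline) {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if wait < 8*time.Second {
			wait *= 2
		}
	}
	return fmt.Errorf("timed out: %w", err)
}

func main() {
	err := retry(30*time.Second, func() error {
		// Check that the 9p mount is visible; the real test runs this inside
		// the guest via `minikube ssh` rather than on the host.
		return exec.Command("sh", "-c", "findmnt -T /mount-9p | grep 9p").Run()
	})
	fmt.Println("result:", err)
}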

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "345.822187ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "77.759997ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "300.585166ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.956198ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdspecific-port2132410708/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.382956ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:38:12.764828    9065 retry.go:31] will retry after 434.677723ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdspecific-port2132410708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "sudo umount -f /mount-9p": exit status 1 (223.978009ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-820240 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdspecific-port2132410708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T" /mount1: exit status 1 (320.725691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:38:14.481465    9065 retry.go:31] will retry after 286.795711ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-820240 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-820240 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-820240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)
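VerifyCleanup launches three background mount daemons for /mount1, /mount2, and /mount3 and then removes them all with a single `mount --kill=true`; the "unable to find parent, assuming dead" lines are the stop step confirming the daemons are already gone. A hedged sketch of that launch-then-kill shape, driving the same commands from Go; only the paths and flags come from the log, the orchestration code is illustrative:

// mountcleanup_sketch.go - starts several background `minikube mount` daemons
// and then relies on `mount --kill=true` to clean all of them up at once.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	src := "/tmp/TestFunctionalparallelMountCmdVerifyCleanup4230002328/001" // from the log above
	var daemons []*exec.Cmd
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-amd64", "mount",
			"-p", "functional-820240", src+":"+target, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil { // run in the background, like the test daemons
			panic(err)
		}
		daemons = append(daemons, cmd)
	}

	// one call tears down every mount daemon for the profile
	if err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-820240", "--kill=true").Run(); err != nil {
		fmt.Println("kill failed:", err)
	}

	// reap the background processes; they should already have exited
	for _, d := range daemons {
		_ = d.Wait()
	}
}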

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-820240
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-820240
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-820240
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-5125/.minikube/files/etc/test/nested/copy/9065/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 22:38:38.667317    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:39:06.369601    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-497660 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m16.969915245s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 22:39:53.250996    9065 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-497660 --alsologtostderr -v=8: (52.497371512s)
functional_test.go:678: soft start took 52.497786224s for "functional-497660" cluster.
I1210 22:40:45.748815    9065 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-497660 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:3.1: (1.060798977s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:3.3: (1.175538305s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 cache add registry.k8s.io/pause:latest: (1.098976699s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1220709829/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache add minikube-local-cache-test:functional-497660
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 cache add minikube-local-cache-test:functional-497660: (1.750597817s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache delete minikube-local-cache-test:functional-497660
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.756914ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)
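The cache_reload flow above removes an image inside the node with crictl, confirms `crictl inspecti` now fails, runs `cache reload`, and confirms the image is back. A sketch of that verify/reload/verify sequence under the same assumptions (binary at out/minikube-linux-amd64, profile functional-497660); it illustrates the steps, it is not the test's own code:

// cachereload_sketch.go - mirrors the remove / inspect / reload / inspect
// steps logged above.
package main

import (
	"fmt"
	"os/exec"
)

func inNode(profile, cmd string) error {
	return exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", cmd).Run()
}

func main() {
	const profile = "functional-497660"
	const img = "registry.k8s.io/pause:latest"

	// 1. remove the image from the node's container runtime
	_ = inNode(profile, "sudo crictl rmi "+img)

	// 2. inspecti should now fail (exit status 1) because the image is gone
	if err := inNode(profile, "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected image to be missing")
	}

	// 3. reload everything in minikube's local cache back into the node
	_ = exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload").Run()

	// 4. the image should be present again
	if err := inNode(profile, "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}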

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 kubectl -- --context functional-497660 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-497660 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (37.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-497660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.633455339s)
functional_test.go:776: restart took 37.633551785s for "functional-497660" cluster.
I1210 22:41:31.146657    9065 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (37.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-497660 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
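ComponentHealth lists the control-plane pods with `-l tier=control-plane -n kube-system -o=json` and reports each one's phase and readiness, as logged above. A sketch of that check decoding kubectl's JSON output; the struct below is a hand-written subset of the Pod fields needed here, not a generated client type:

// componenthealth_sketch.go - decodes `kubectl get po -o json` output and
// reports phase and Ready condition for each control-plane pod.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-497660",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}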

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 logs: (1.303382436s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2688833009/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2688833009/001/logs.txt: (1.300623513s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-497660 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-497660
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-497660: exit status 115 (228.043007ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.7:32005 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-497660 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)
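InvalidService applies a Service whose pods never become runnable, so `minikube service invalid-svc` prints a NodePort URL yet exits 115 with SVC_UNREACHABLE because nothing backs it. A small illustrative check that separates "URL allocated" from "something actually serving" by looking at the Service's endpoints; the jsonpath query is standard kubectl, the rest is a sketch:

// svcendpoints_sketch.go - checks whether a Service has any ready endpoints
// before trying to reach it, in the spirit of the SVC_UNREACHABLE failure above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-497660",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no running pod backs invalid-svc; a NodePort URL alone is not reachable")
		return
	}
	fmt.Println("ready endpoint IPs:", string(out))
}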

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 config get cpus: exit status 14 (64.63438ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 config get cpus: exit status 14 (56.852691ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.41s)
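The ConfigCmd run shows `config get cpus` exiting with status 14 while the key is unset and succeeding once `config set cpus 2` has run. A sketch of asserting that exit code from Go; the value 14 is simply what the log above reports for a missing key:

// configcmd_sketch.go - runs `minikube config get cpus` and inspects the exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-497660",
		"config", "get", "cpus")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cpus is set")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// 14 is the exit status the log above shows for "key not found in config"
		fmt.Println("cpus is not set")
	default:
		fmt.Println("unexpected error:", err)
	}
}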

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497660 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (121.052158ms)

                                                
                                                
-- stdout --
	* [functional-497660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:42:16.425968   18350 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:42:16.426224   18350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:42:16.426234   18350 out.go:374] Setting ErrFile to fd 2...
	I1210 22:42:16.426238   18350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:42:16.426513   18350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:42:16.427020   18350 out.go:368] Setting JSON to false
	I1210 22:42:16.427911   18350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1477,"bootTime":1765405059,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:42:16.427963   18350 start.go:143] virtualization: kvm guest
	I1210 22:42:16.430039   18350 out.go:179] * [functional-497660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 22:42:16.431392   18350 notify.go:221] Checking for updates...
	I1210 22:42:16.431419   18350 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:42:16.433069   18350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:42:16.434576   18350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:42:16.435772   18350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:42:16.439632   18350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:42:16.440958   18350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:42:16.442883   18350 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 22:42:16.443533   18350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:42:16.476182   18350 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 22:42:16.477254   18350 start.go:309] selected driver: kvm2
	I1210 22:42:16.477271   18350 start.go:927] validating driver "kvm2" against &{Name:functional-497660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-497660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sch
eduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:42:16.477394   18350 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:42:16.479608   18350 out.go:203] 
	W1210 22:42:16.480900   18350 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 22:42:16.481972   18350 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)
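DryRun passes because minikube rejects `--memory 250MB` before doing any work: 250MiB is under the 1800MB usable minimum, so it exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY. A toy sketch of that kind of pre-flight check; the threshold and exit code are taken from the output above, everything else is made up for illustration:

// memorycheck_sketch.go - a toy pre-flight check in the spirit of the
// RSRC_INSUFFICIENT_REQ_MEMORY failure shown above.
package main

import (
	"fmt"
	"os"
)

const minUsableMB = 1800 // minimum reported in the log above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status the dry-run above returns
	}
}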

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-497660 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-497660 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (114.232681ms)

                                                
                                                
-- stdout --
	* [functional-497660] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:41:45.657786   17936 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:41:45.657874   17936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:41:45.657881   17936 out.go:374] Setting ErrFile to fd 2...
	I1210 22:41:45.657885   17936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:41:45.658150   17936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:41:45.658607   17936 out.go:368] Setting JSON to false
	I1210 22:41:45.659409   17936 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1447,"bootTime":1765405059,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 22:41:45.659473   17936 start.go:143] virtualization: kvm guest
	I1210 22:41:45.661462   17936 out.go:179] * [functional-497660] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 22:41:45.662600   17936 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 22:41:45.662601   17936 notify.go:221] Checking for updates...
	I1210 22:41:45.664881   17936 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 22:41:45.666030   17936 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 22:41:45.667308   17936 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 22:41:45.668478   17936 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 22:41:45.669711   17936 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 22:41:45.671406   17936 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 22:41:45.671891   17936 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 22:41:45.702980   17936 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 22:41:45.704276   17936 start.go:309] selected driver: kvm2
	I1210 22:41:45.704290   17936 start.go:927] validating driver "kvm2" against &{Name:functional-497660 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:
22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-497660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil
> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 22:41:45.704392   17936 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 22:41:45.706204   17936 out.go:203] 
	W1210 22:41:45.707514   17936 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 22:41:45.708681   17936 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.76s)
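StatusCmd exercises three output modes: the default table, a Go template (`-f host:{{.Host}},...`), and `-o json`. A sketch that consumes the JSON form; the field names below mirror the template keys and are an assumption about the JSON shape rather than a documented schema:

// statusjson_sketch.go - parses `minikube status -o json`; field names follow
// the template keys used above (.Host, .Kubelet, .APIServer, .Kubeconfig).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-497660",
		"status", "-o", "json").Output()
	if err != nil {
		// `minikube status` uses non-zero exit codes for non-running states,
		// so the output may still be worth parsing; keep going here.
		fmt.Println("status returned:", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host:%s kubelet:%s apiserver:%s kubeconfig:%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}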

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-497660 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-497660 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-x9n2b" [76e6c958-6c1b-46f7-9691-cae03089c3c9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-x9n2b" [76e6c958-6c1b-46f7-9691-cae03089c3c9] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003841819s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.7:31599
functional_test.go:1680: http://192.168.39.7:31599: success! body:
Request served by hello-node-connect-9f67c86d4-x9n2b

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.7:31599
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.44s)
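ServiceCmdConnect deploys an echo server, exposes it as a NodePort service, asks minikube for the URL, and expects the response body to name the serving pod. A sketch of the final verification step, hard-coding the URL the log reports; in the test it comes from `minikube service hello-node-connect --url`:

// serviceconnect_sketch.go - fetches the NodePort URL and checks the echo-server
// reply mentions the deployment name, mirroring the success case logged above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// hypothetical: in the test this URL comes from `minikube service ... --url`
	url := "http://192.168.39.7:31599"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(body), "hello-node-connect") {
		fmt.Printf("%s: success! body:\n%s\n", url, body)
	} else {
		fmt.Printf("unexpected body from %s:\n%s\n", url, body)
	}
}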

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (37.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [69df49db-a6bb-4224-a082-ef172c852dbd] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003688903s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-497660 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-497660 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-497660 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-497660 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6538fbdc-5028-4154-ae27-6b887ff06a15] Pending
helpers_test.go:353: "sp-pod" [6538fbdc-5028-4154-ae27-6b887ff06a15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6538fbdc-5028-4154-ae27-6b887ff06a15] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006955666s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-497660 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-497660 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-497660 delete -f testdata/storage-provisioner/pod.yaml: (5.303610648s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-497660 apply -f testdata/storage-provisioner/pod.yaml
I1210 22:42:05.048079    9065 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a] Pending
helpers_test.go:353: "sp-pod" [ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ef6f0c10-336b-4f80-ae8e-4fd51f3dc27a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.031465735s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-497660 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (37.81s)
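The PersistentVolumeClaim test is about durability: the first sp-pod touches /tmp/mount/foo, the pod is deleted and recreated from the same manifest, and the new pod still sees the file on the claim. A hedged sketch of that write/recreate/read cycle driven through kubectl; names and paths are the ones in the log, the helper itself is illustrative (and the real test waits for the new pod to be Running before reading):

// pvc_persistence_sketch.go - writes a file from one pod, recreates the pod,
// and checks the file survived on the PersistentVolumeClaim.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-497660"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// write a marker file onto the mounted claim
	if err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(err)
	}
	// recreate the pod that mounts the claim
	if err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		panic(err)
	}
	if err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		panic(err)
	}
	// (the real test waits for the new sp-pod to be Running before this step)
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
		panic(err)
	}
}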

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh -n functional-497660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cp functional-497660:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3426549446/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh -n functional-497660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh -n functional-497660 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (41.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-497660 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-8bc26" [f796a587-8be2-4454-b9d4-117d209d6c8e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-8bc26" [f796a587-8be2-4454-b9d4-117d209d6c8e] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 30.01766895s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;": exit status 1 (307.157644ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:42:16.210599    9065 retry.go:31] will retry after 1.490270358s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;": exit status 1 (226.069099ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:42:17.927472    9065 retry.go:31] will retry after 1.006765254s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;": exit status 1 (248.922387ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:42:19.184767    9065 retry.go:31] will retry after 2.982538125s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;": exit status 1 (161.374295ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 22:42:22.329004    9065 retry.go:31] will retry after 4.960962546s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-497660 exec mysql-7d7b65bc95-8bc26 -- mysql -ppassword -e "show databases;"
E1210 22:42:35.034168    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.040560    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.052065    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.073493    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.114977    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.196458    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.358008    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:35.679816    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:36.321939    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:37.603573    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:40.166170    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:45.287740    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:42:55.529829    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:43:16.011667    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:43:38.666079    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:43:56.973318    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:45:18.894766    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (41.71s)
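Note: the ERROR 2002 above is the usual window in which the mysql container is already Running but mysqld has not yet started accepting connections on its socket, so the harness simply re-runs the same kubectl exec after a delay. A minimal sketch of that poll-and-retry pattern in Go (context, pod name and password are taken from the log; the helper itself is illustrative, not the functional_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs `kubectl exec ... mysql -e "show databases;"` until it
// succeeds or the deadline passes. ERROR 2002 ("can't connect through socket")
// is expected while mysqld is still initializing inside the pod.
func waitForMySQL(kubeContext, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2 // simple capped exponential backoff
		}
	}
}

func main() {
	err := waitForMySQL("functional-497660", "mysql-7d7b65bc95-8bc26", 2*time.Minute)
	fmt.Println(err)
}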

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9065/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /etc/test/nested/copy/9065/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)
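Note: minikube's file sync copies anything placed under the host's .minikube/files directory into the guest at the same absolute path, which is why a marker staged at .../files/etc/test/nested/copy/9065/hosts on the host shows up at /etc/test/nested/copy/9065/hosts above. A rough sketch, assuming the default ~/.minikube location and that the profile is (re)started after the file is staged:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Stage the marker under ~/.minikube/files/...; the sync into the guest
	// happens when the profile starts, so this must precede `minikube start`.
	hostPath := filepath.Join(os.Getenv("HOME"), ".minikube", "files",
		"etc", "test", "nested", "copy", "9065", "hosts")
	want := "Test file for checking file sync process"
	if err := os.MkdirAll(filepath.Dir(hostPath), 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(hostPath, []byte(want+"\n"), 0o644); err != nil {
		panic(err)
	}

	// Verify from inside the guest, exactly as the test does.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-497660",
		"ssh", "sudo cat /etc/test/nested/copy/9065/hosts").Output()
	fmt.Println("synced:", strings.Contains(string(out), want))
}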

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9065.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /etc/ssl/certs/9065.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9065.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /usr/share/ca-certificates/9065.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/90652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /etc/ssl/certs/90652.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/90652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /usr/share/ca-certificates/90652.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.03s)
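Note: each host certificate is expected at two named locations plus a hashed name (the .0 entries are, presumably, the OpenSSL subject-hash aliases of the same files). A compact sketch of the same existence check, with the paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Each path is expected to exist and be readable inside the guest, so a plain
// `sudo cat` over ssh is enough to prove it is there.
func main() {
	paths := []string{
		"/etc/ssl/certs/9065.pem",
		"/usr/share/ca-certificates/9065.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/90652.pem",
		"/usr/share/ca-certificates/90652.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-497660",
			"ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Printf("missing or unreadable: %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("ok: %s\n", p)
	}
}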

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-497660 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "sudo systemctl is-active docker": exit status 1 (174.930018ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "sudo systemctl is-active containerd": exit status 1 (178.740934ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)
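Note: the non-zero exits above are the expected outcome. `systemctl is-active` prints the unit state and only exits 0 when the unit is active, so on a crio cluster both docker and containerd should print "inactive" and fail the command. A small sketch that makes that decision explicit (binary path and profile name are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether a unit inside the guest is anything other
// than "active". systemctl exits non-zero for inactive units, so the error is
// ignored and only the printed state is inspected.
func runtimeDisabled(profile, unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) != "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s disabled: %v\n", unit, runtimeDisabled("functional-497660", unit))
	}
}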

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.84s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497660 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-497660
localhost/kicbase/echo-server:functional-497660
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497660 image ls --format short --alsologtostderr:
I1210 22:42:17.808401   18519 out.go:360] Setting OutFile to fd 1 ...
I1210 22:42:17.808549   18519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:17.808564   18519 out.go:374] Setting ErrFile to fd 2...
I1210 22:42:17.808571   18519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:17.808936   18519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:42:17.809814   18519 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:17.809963   18519 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:17.812796   18519 ssh_runner.go:195] Run: systemctl --version
I1210 22:42:17.815625   18519 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:17.816098   18519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:42:17.816123   18519 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:17.816295   18519 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:42:17.915874   18519 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
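Note: as the stderr shows, `image ls` is a thin wrapper over `sudo crictl images --output json` inside the guest. A sketch that reads that JSON directly, assuming crictl keeps its usual shape (an images array with repoTags/repoDigests/size fields); the struct below models only what the sketch needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-497660",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. registry.k8s.io/pause:3.10.1
		}
	}
}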

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497660 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-497660  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-497660  │ f1af58d46fafa │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497660 image ls --format table --alsologtostderr:
I1210 22:42:20.855637   18701 out.go:360] Setting OutFile to fd 1 ...
I1210 22:42:20.855893   18701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:20.855904   18701 out.go:374] Setting ErrFile to fd 2...
I1210 22:42:20.855909   18701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:20.856157   18701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:42:20.856842   18701 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:20.856959   18701 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:20.859110   18701 ssh_runner.go:195] Run: systemctl --version
I1210 22:42:20.861771   18701 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:20.862123   18701 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:42:20.862146   18701 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:20.862282   18701 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:42:21.018831   18701 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497660 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e45
11d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTa
gs":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-497660"],"size":"4943877"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c144
7dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"a3e246e9556e93d71e2850085ba581b37
6c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"f1af58d46fafac9b7a6e63ff9d077b88f6bf8967934e97b9b7281f05b44e7424","repoDigests":["localhost/minikube
-local-cache-test@sha256:b4e146d445108f7495d0757c01299c97b3d87fcd7b6911267184c95d68370d57"],"repoTags":["localhost/minikube-local-cache-test:functional-497660"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed64
7b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497660 image ls --format json --alsologtostderr:
I1210 22:42:20.572509   18690 out.go:360] Setting OutFile to fd 1 ...
I1210 22:42:20.572608   18690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:20.572618   18690 out.go:374] Setting ErrFile to fd 2...
I1210 22:42:20.572624   18690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:20.572832   18690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:42:20.573358   18690 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:20.573486   18690 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:20.575510   18690 ssh_runner.go:195] Run: systemctl --version
I1210 22:42:20.577650   18690 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:20.578061   18690 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:42:20.578092   18690 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:20.578256   18690 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:42:20.705839   18690 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497660 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-497660
size: "4943877"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: f1af58d46fafac9b7a6e63ff9d077b88f6bf8967934e97b9b7281f05b44e7424
repoDigests:
- localhost/minikube-local-cache-test@sha256:b4e146d445108f7495d0757c01299c97b3d87fcd7b6911267184c95d68370d57
repoTags:
- localhost/minikube-local-cache-test:functional-497660
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497660 image ls --format yaml --alsologtostderr:
I1210 22:42:18.022802   18530 out.go:360] Setting OutFile to fd 1 ...
I1210 22:42:18.023026   18530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:18.023034   18530 out.go:374] Setting ErrFile to fd 2...
I1210 22:42:18.023038   18530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:18.023231   18530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:42:18.023770   18530 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:18.023857   18530 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:18.025839   18530 ssh_runner.go:195] Run: systemctl --version
I1210 22:42:18.027759   18530 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:18.028253   18530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:42:18.028286   18530 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:18.028448   18530 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:42:18.115084   18530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (7.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh pgrep buildkitd: exit status 1 (172.762182ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image build -t localhost/my-image:functional-497660 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 image build -t localhost/my-image:functional-497660 testdata/build --alsologtostderr: (7.24260566s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-497660 image build -t localhost/my-image:functional-497660 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d146a065e26
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-497660
--> 6f3f6ca9113
Successfully tagged localhost/my-image:functional-497660
6f3f6ca9113514090a906a2c6b1daa501276bbafdb789bf245204b8b8f9f51e1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-497660 image build -t localhost/my-image:functional-497660 testdata/build --alsologtostderr:
I1210 22:42:18.402828   18572 out.go:360] Setting OutFile to fd 1 ...
I1210 22:42:18.402967   18572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:18.402978   18572 out.go:374] Setting ErrFile to fd 2...
I1210 22:42:18.402984   18572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:42:18.403275   18572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:42:18.403916   18572 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:18.404537   18572 config.go:182] Loaded profile config "functional-497660": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 22:42:18.406775   18572 ssh_runner.go:195] Run: systemctl --version
I1210 22:42:18.409198   18572 main.go:143] libmachine: domain functional-497660 has defined MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:18.409624   18572 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:f5:2e", ip: ""} in network mk-functional-497660: {Iface:virbr1 ExpiryTime:2025-12-10 23:38:51 +0000 UTC Type:0 Mac:52:54:00:26:f5:2e Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-497660 Clientid:01:52:54:00:26:f5:2e}
I1210 22:42:18.409653   18572 main.go:143] libmachine: domain functional-497660 has defined IP address 192.168.39.7 and MAC address 52:54:00:26:f5:2e in network mk-functional-497660
I1210 22:42:18.409790   18572 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/functional-497660/id_rsa Username:docker}
I1210 22:42:18.511935   18572 build_images.go:162] Building image from path: /tmp/build.3848894063.tar
I1210 22:42:18.511988   18572 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 22:42:18.538575   18572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3848894063.tar
I1210 22:42:18.545614   18572 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3848894063.tar: stat -c "%s %y" /var/lib/minikube/build/build.3848894063.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3848894063.tar': No such file or directory
I1210 22:42:18.545655   18572 ssh_runner.go:362] scp /tmp/build.3848894063.tar --> /var/lib/minikube/build/build.3848894063.tar (3072 bytes)
I1210 22:42:18.584906   18572 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3848894063
I1210 22:42:18.602774   18572 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3848894063 -xf /var/lib/minikube/build/build.3848894063.tar
I1210 22:42:18.619173   18572 crio.go:315] Building image: /var/lib/minikube/build/build.3848894063
I1210 22:42:18.619241   18572 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-497660 /var/lib/minikube/build/build.3848894063 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 22:42:25.550757   18572 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-497660 /var/lib/minikube/build/build.3848894063 --cgroup-manager=cgroupfs: (6.931489851s)
I1210 22:42:25.550838   18572 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3848894063
I1210 22:42:25.567168   18572 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3848894063.tar
I1210 22:42:25.579928   18572 build_images.go:218] Built localhost/my-image:functional-497660 from /tmp/build.3848894063.tar
I1210 22:42:25.579967   18572 build_images.go:134] succeeded building to: functional-497660
I1210 22:42:25.579972   18572 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (7.60s)
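Note: the stderr spells out what `image build` does on a crio cluster: tar the local build context, copy the tarball into /var/lib/minikube/build in the guest, unpack it, run `sudo podman build ... --cgroup-manager=cgroupfs`, then clean up. A rough sketch of that sequence driven over `minikube cp`/`minikube ssh` (the staging paths and the use of `minikube cp` are assumptions for illustration, not the build_images.go helpers):

package main

import (
	"fmt"
	"os/exec"
)

// buildInGuest mirrors the steps visible in the stderr above. Each step is a
// separate minikube invocation; a real implementation would reuse one ssh session.
func buildInGuest(profile, contextTar, tag string) error {
	steps := [][]string{
		{"ssh", "sudo mkdir -p /var/lib/minikube/build"},
		{"cp", contextTar, "/var/lib/minikube/build/ctx.tar"},
		{"ssh", "sudo mkdir -p /var/lib/minikube/build/ctx && " +
			"sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/ctx.tar"},
		{"ssh", "sudo podman build -t " + tag +
			" /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"},
		{"ssh", "sudo rm -rf /var/lib/minikube/build/ctx /var/lib/minikube/build/ctx.tar"},
	}
	for _, s := range steps {
		args := append([]string{"-p", profile}, s...)
		if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("step %v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(buildInGuest("functional-497660", "/tmp/build-context.tar",
		"localhost/my-image:functional-497660"))
}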

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.134708ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.99909ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "307.02895ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.642795ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)
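Note: a sketch of consuming that JSON output follows; the valid/invalid top-level arrays and the Name/Status fields are assumptions about the current `profile list -o json` shape rather than a documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the sketch needs are modelled.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Printf("%s\t%s\n", v.Name, v.Status)
	}
	fmt.Printf("%d invalid profile(s)\n", len(p.Invalid))
}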

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image load --daemon kicbase/echo-server:functional-497660 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-497660 image load --daemon kicbase/echo-server:functional-497660 --alsologtostderr: (1.1744772s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-497660 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-497660 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-plpbx" [040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-plpbx" [040c7ec9-cc60-49f6-b97a-2ce27fb2bc1c] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003869316s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.17s)
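Note: the create/expose/wait sequence above is the standard smoke test for NodePort services. A condensed sketch using `kubectl wait` in place of the harness's own polling (image, port and deployment name are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// deployAndWait creates the deployment, exposes it as a NodePort service, and
// waits for the labelled pod to become Ready.
func deployAndWait(ctx string) error {
	cmds := [][]string{
		{"create", "deployment", "hello-node", "--image", "kicbase/echo-server"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=10m"},
	}
	for _, c := range cmds {
		args := append([]string{"--context", ctx}, c...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubectl %v: %v\n%s", c, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(deployAndWait("functional-497660"))
}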

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image load --daemon kicbase/echo-server:functional-497660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-497660
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image load --daemon kicbase/echo-server:functional-497660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image save kicbase/echo-server:functional-497660 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image rm kicbase/echo-server:functional-497660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)
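ImageSaveToFile and ImageLoadFromFile together form a tarball round trip; a sketch of that round trip with the logged commands (the tar path is the workspace path used by this job):

	# export the image from the cluster to a tar archive on the host
	out/minikube-linux-amd64 -p functional-497660 image save kicbase/echo-server:functional-497660 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	# load it back from the archive and list images to verify
	out/minikube-linux-amd64 -p functional-497660 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-497660 image ls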

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-497660
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 image save --daemon kicbase/echo-server:functional-497660 --alsologtostderr
I1210 22:41:44.957103    9065 detect.go:223] nested VM detected
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (28.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo590004616/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765406509186015319" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo590004616/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765406509186015319" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo590004616/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765406509186015319" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo590004616/001/test-1765406509186015319
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.990002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:41:49.357302    9065 retry.go:31] will retry after 520.886535ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 22:41 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 22:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 22:41 test-1765406509186015319
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh cat /mount-9p/test-1765406509186015319
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-497660 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [4d50e361-2d6d-4adc-87c5-9e5bb49dc05f] Pending
helpers_test.go:353: "busybox-mount" [4d50e361-2d6d-4adc-87c5-9e5bb49dc05f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [4d50e361-2d6d-4adc-87c5-9e5bb49dc05f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [4d50e361-2d6d-4adc-87c5-9e5bb49dc05f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.007357995s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-497660 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo590004616/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (28.30s)
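A rough sketch of the 9p mount flow this subtest drives: the mount command runs as a background daemon on the host, and findmnt inside the guest confirms the 9p filesystem before file contents are checked (the single retry above covers the short window before the mount is ready). The host directory here is illustrative; the test uses a per-run temp directory:

	# host side: expose a host directory inside the guest at /mount-9p (left running in the background)
	out/minikube-linux-amd64 mount -p functional-497660 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &
	# guest side: verify the 9p mount exists, then inspect its contents
	out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-497660 ssh -- ls -la /mount-9p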

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service list -o json
functional_test.go:1504: Took "424.815268ms" to run "out/minikube-linux-amd64 -p functional-497660 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.7:30659
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.7:30659
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.31s)
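Taken together, the ServiceCmd subtests above cover the usual ways of discovering an exposed service's endpoint; a sketch using the logged commands against the hello-node service:

	# list services known to the profile, in plain and JSON form
	out/minikube-linux-amd64 -p functional-497660 service list
	out/minikube-linux-amd64 -p functional-497660 service list -o json
	# resolve the hello-node endpoint as an HTTPS URL and as a plain URL
	out/minikube-linux-amd64 -p functional-497660 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-497660 service hello-node --url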

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2446531598/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.643287ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:42:17.686170    9065 retry.go:31] will retry after 424.20272ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2446531598/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "sudo umount -f /mount-9p": exit status 1 (176.060763ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-497660 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2446531598/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount1: exit status 1 (246.737813ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 22:42:19.117817    9065 retry.go:31] will retry after 718.574533ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-497660 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-497660 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo344073283/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.65s)
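VerifyCleanup relies on mount --kill=true tearing down every mount daemon for the profile in one step; a sketch reduced to a single mount for brevity (the host path is illustrative):

	# start a background mount and confirm it from inside the guest
	out/minikube-linux-amd64 mount -p functional-497660 /tmp/some-host-dir:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-497660 ssh "findmnt -T" /mount1
	# kill all mount processes belonging to this profile at once
	out/minikube-linux-amd64 mount -p functional-497660 --kill=true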

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-497660
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (215.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1210 22:47:35.028975    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:48:02.738740    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:48:38.667092    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:50:01.731753    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m35.132691164s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (215.71s)
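A sketch of bringing up the same multi-control-plane topology by hand, using the start invocation from the log followed by the status check; per the status output later in this report, this run ends up with three control-plane nodes (plus the worker added in AddWorkerNode):

	# --ha requests a multi-control-plane cluster; --wait true blocks until components report healthy
	out/minikube-linux-amd64 -p ha-670744 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5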

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 kubectl -- rollout status deployment/busybox: (5.700652674s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-fhdr8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-pxdgc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-fhdr8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-pxdgc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-fhdr8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-pxdgc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.97s)
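The DNS checks above can be repeated against any pod of the busybox deployment; a sketch with one external and one in-cluster lookup (the pod name is the one generated in this run and will differ elsewhere):

	out/minikube-linux-amd64 -p ha-670744 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 -p ha-670744 kubectl -- rollout status deployment/busybox
	# resolve an external name and the in-cluster API service name from inside a pod
	out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- nslookup kubernetes.io
	out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- nslookup kubernetes.default.svc.cluster.local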

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-5c5pg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-fhdr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-fhdr8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-pxdgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 kubectl -- exec busybox-7b57f96db7-pxdgc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 node add --alsologtostderr -v 5: (44.712461224s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.38s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-670744 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp testdata/cp-test.txt ha-670744:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile814811663/001/cp-test_ha-670744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744:/home/docker/cp-test.txt ha-670744-m02:/home/docker/cp-test_ha-670744_ha-670744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test_ha-670744_ha-670744-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744:/home/docker/cp-test.txt ha-670744-m03:/home/docker/cp-test_ha-670744_ha-670744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test_ha-670744_ha-670744-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744:/home/docker/cp-test.txt ha-670744-m04:/home/docker/cp-test_ha-670744_ha-670744-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test_ha-670744_ha-670744-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp testdata/cp-test.txt ha-670744-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile814811663/001/cp-test_ha-670744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m02:/home/docker/cp-test.txt ha-670744:/home/docker/cp-test_ha-670744-m02_ha-670744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test_ha-670744-m02_ha-670744.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m02:/home/docker/cp-test.txt ha-670744-m03:/home/docker/cp-test_ha-670744-m02_ha-670744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test_ha-670744-m02_ha-670744-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m02:/home/docker/cp-test.txt ha-670744-m04:/home/docker/cp-test_ha-670744-m02_ha-670744-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test_ha-670744-m02_ha-670744-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp testdata/cp-test.txt ha-670744-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile814811663/001/cp-test_ha-670744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m03:/home/docker/cp-test.txt ha-670744:/home/docker/cp-test_ha-670744-m03_ha-670744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test_ha-670744-m03_ha-670744.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m03:/home/docker/cp-test.txt ha-670744-m02:/home/docker/cp-test_ha-670744-m03_ha-670744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test_ha-670744-m03_ha-670744-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m03:/home/docker/cp-test.txt ha-670744-m04:/home/docker/cp-test_ha-670744-m03_ha-670744-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test_ha-670744-m03_ha-670744-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp testdata/cp-test.txt ha-670744-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile814811663/001/cp-test_ha-670744-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m04:/home/docker/cp-test.txt ha-670744:/home/docker/cp-test_ha-670744-m04_ha-670744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test_ha-670744-m04_ha-670744.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m04:/home/docker/cp-test.txt ha-670744-m02:/home/docker/cp-test_ha-670744-m04_ha-670744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test_ha-670744-m04_ha-670744-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 cp ha-670744-m04:/home/docker/cp-test.txt ha-670744-m03:/home/docker/cp-test_ha-670744-m04_ha-670744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m03 "sudo cat /home/docker/cp-test_ha-670744-m04_ha-670744-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.72s)
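Every copy above is verified by cat-ing the file over ssh on the destination node; a sketch of one host-to-node hop and one node-to-node hop, taken from the logged commands:

	# host -> primary control-plane node
	out/minikube-linux-amd64 -p ha-670744 cp testdata/cp-test.txt ha-670744:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744 "sudo cat /home/docker/cp-test.txt"
	# primary node -> m02
	out/minikube-linux-amd64 -p ha-670744 cp ha-670744:/home/docker/cp-test.txt ha-670744-m02:/home/docker/cp-test_ha-670744_ha-670744-m02.txt
	out/minikube-linux-amd64 -p ha-670744 ssh -n ha-670744-m02 "sudo cat /home/docker/cp-test_ha-670744_ha-670744-m02.txt"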

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (87.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node stop m02 --alsologtostderr -v 5
E1210 22:51:38.555173    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.562190    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.573635    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.595129    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.636655    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.718387    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:38.879987    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:39.202140    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:39.844237    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:41.125563    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:43.687773    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:48.809627    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:51:59.051248    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:52:19.532770    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:52:35.030567    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:53:00.495581    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 node stop m02 --alsologtostderr -v 5: (1m26.658144411s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5: exit status 7 (510.562278ms)

                                                
                                                
-- stdout --
	ha-670744
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670744-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670744-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670744-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 22:53:04.034048   22804 out.go:360] Setting OutFile to fd 1 ...
	I1210 22:53:04.034394   22804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:53:04.034409   22804 out.go:374] Setting ErrFile to fd 2...
	I1210 22:53:04.034421   22804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 22:53:04.034628   22804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 22:53:04.034826   22804 out.go:368] Setting JSON to false
	I1210 22:53:04.034856   22804 mustload.go:66] Loading cluster: ha-670744
	I1210 22:53:04.034996   22804 notify.go:221] Checking for updates...
	I1210 22:53:04.035352   22804 config.go:182] Loaded profile config "ha-670744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 22:53:04.035380   22804 status.go:174] checking status of ha-670744 ...
	I1210 22:53:04.037826   22804 status.go:371] ha-670744 host status = "Running" (err=<nil>)
	I1210 22:53:04.037848   22804 host.go:66] Checking if "ha-670744" exists ...
	I1210 22:53:04.040324   22804 main.go:143] libmachine: domain ha-670744 has defined MAC address 52:54:00:e4:45:e4 in network mk-ha-670744
	I1210 22:53:04.040963   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:45:e4", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:47:10 +0000 UTC Type:0 Mac:52:54:00:e4:45:e4 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-670744 Clientid:01:52:54:00:e4:45:e4}
	I1210 22:53:04.041003   22804 main.go:143] libmachine: domain ha-670744 has defined IP address 192.168.39.19 and MAC address 52:54:00:e4:45:e4 in network mk-ha-670744
	I1210 22:53:04.041157   22804 host.go:66] Checking if "ha-670744" exists ...
	I1210 22:53:04.041367   22804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:53:04.043602   22804 main.go:143] libmachine: domain ha-670744 has defined MAC address 52:54:00:e4:45:e4 in network mk-ha-670744
	I1210 22:53:04.043960   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:45:e4", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:47:10 +0000 UTC Type:0 Mac:52:54:00:e4:45:e4 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-670744 Clientid:01:52:54:00:e4:45:e4}
	I1210 22:53:04.043981   22804 main.go:143] libmachine: domain ha-670744 has defined IP address 192.168.39.19 and MAC address 52:54:00:e4:45:e4 in network mk-ha-670744
	I1210 22:53:04.044122   22804 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/ha-670744/id_rsa Username:docker}
	I1210 22:53:04.137683   22804 ssh_runner.go:195] Run: systemctl --version
	I1210 22:53:04.145366   22804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:53:04.163717   22804 kubeconfig.go:125] found "ha-670744" server: "https://192.168.39.254:8443"
	I1210 22:53:04.163756   22804 api_server.go:166] Checking apiserver status ...
	I1210 22:53:04.163791   22804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:53:04.185654   22804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W1210 22:53:04.198152   22804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 22:53:04.198219   22804 ssh_runner.go:195] Run: ls
	I1210 22:53:04.203774   22804 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 22:53:04.208396   22804 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 22:53:04.208422   22804 status.go:463] ha-670744 apiserver status = Running (err=<nil>)
	I1210 22:53:04.208458   22804 status.go:176] ha-670744 status: &{Name:ha-670744 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:53:04.208497   22804 status.go:174] checking status of ha-670744-m02 ...
	I1210 22:53:04.210104   22804 status.go:371] ha-670744-m02 host status = "Stopped" (err=<nil>)
	I1210 22:53:04.210118   22804 status.go:384] host is not running, skipping remaining checks
	I1210 22:53:04.210123   22804 status.go:176] ha-670744-m02 status: &{Name:ha-670744-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:53:04.210136   22804 status.go:174] checking status of ha-670744-m03 ...
	I1210 22:53:04.211346   22804 status.go:371] ha-670744-m03 host status = "Running" (err=<nil>)
	I1210 22:53:04.211364   22804 host.go:66] Checking if "ha-670744-m03" exists ...
	I1210 22:53:04.213617   22804 main.go:143] libmachine: domain ha-670744-m03 has defined MAC address 52:54:00:28:6a:c6 in network mk-ha-670744
	I1210 22:53:04.213967   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:6a:c6", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:49:13 +0000 UTC Type:0 Mac:52:54:00:28:6a:c6 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-670744-m03 Clientid:01:52:54:00:28:6a:c6}
	I1210 22:53:04.213988   22804 main.go:143] libmachine: domain ha-670744-m03 has defined IP address 192.168.39.140 and MAC address 52:54:00:28:6a:c6 in network mk-ha-670744
	I1210 22:53:04.214112   22804 host.go:66] Checking if "ha-670744-m03" exists ...
	I1210 22:53:04.214314   22804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:53:04.216123   22804 main.go:143] libmachine: domain ha-670744-m03 has defined MAC address 52:54:00:28:6a:c6 in network mk-ha-670744
	I1210 22:53:04.216462   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:6a:c6", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:49:13 +0000 UTC Type:0 Mac:52:54:00:28:6a:c6 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-670744-m03 Clientid:01:52:54:00:28:6a:c6}
	I1210 22:53:04.216482   22804 main.go:143] libmachine: domain ha-670744-m03 has defined IP address 192.168.39.140 and MAC address 52:54:00:28:6a:c6 in network mk-ha-670744
	I1210 22:53:04.216594   22804 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/ha-670744-m03/id_rsa Username:docker}
	I1210 22:53:04.298728   22804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:53:04.319331   22804 kubeconfig.go:125] found "ha-670744" server: "https://192.168.39.254:8443"
	I1210 22:53:04.319368   22804 api_server.go:166] Checking apiserver status ...
	I1210 22:53:04.319417   22804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 22:53:04.343471   22804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1860/cgroup
	W1210 22:53:04.358475   22804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1860/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 22:53:04.358558   22804 ssh_runner.go:195] Run: ls
	I1210 22:53:04.364414   22804 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 22:53:04.369709   22804 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 22:53:04.369730   22804 status.go:463] ha-670744-m03 apiserver status = Running (err=<nil>)
	I1210 22:53:04.369737   22804 status.go:176] ha-670744-m03 status: &{Name:ha-670744-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 22:53:04.369750   22804 status.go:174] checking status of ha-670744-m04 ...
	I1210 22:53:04.371245   22804 status.go:371] ha-670744-m04 host status = "Running" (err=<nil>)
	I1210 22:53:04.371264   22804 host.go:66] Checking if "ha-670744-m04" exists ...
	I1210 22:53:04.373833   22804 main.go:143] libmachine: domain ha-670744-m04 has defined MAC address 52:54:00:37:29:02 in network mk-ha-670744
	I1210 22:53:04.374234   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:29:02", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:50:56 +0000 UTC Type:0 Mac:52:54:00:37:29:02 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-670744-m04 Clientid:01:52:54:00:37:29:02}
	I1210 22:53:04.374254   22804 main.go:143] libmachine: domain ha-670744-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:37:29:02 in network mk-ha-670744
	I1210 22:53:04.374368   22804 host.go:66] Checking if "ha-670744-m04" exists ...
	I1210 22:53:04.374572   22804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 22:53:04.376794   22804 main.go:143] libmachine: domain ha-670744-m04 has defined MAC address 52:54:00:37:29:02 in network mk-ha-670744
	I1210 22:53:04.377232   22804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:29:02", ip: ""} in network mk-ha-670744: {Iface:virbr1 ExpiryTime:2025-12-10 23:50:56 +0000 UTC Type:0 Mac:52:54:00:37:29:02 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-670744-m04 Clientid:01:52:54:00:37:29:02}
	I1210 22:53:04.377254   22804 main.go:143] libmachine: domain ha-670744-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:37:29:02 in network mk-ha-670744
	I1210 22:53:04.377385   22804 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/ha-670744-m04/id_rsa Username:docker}
	I1210 22:53:04.464061   22804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 22:53:04.482416   22804 status.go:176] ha-670744-m04 status: &{Name:ha-670744-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.17s)
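With one control plane stopped, status exits non-zero (exit status 7 in this run) while still printing a per-node report on stdout; a sketch of the stop-and-inspect sequence:

	out/minikube-linux-amd64 -p ha-670744 node stop m02 --alsologtostderr -v 5
	# the per-node report is printed even though the command exits non-zero
	out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5 || echo "status exited with $?"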

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 node start m02 --alsologtostderr -v 5: (31.173553677s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (293.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 stop --alsologtostderr -v 5
E1210 22:53:38.666332    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:54:22.417911    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 stop --alsologtostderr -v 5: (2m59.257225335s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 start --wait true --alsologtostderr -v 5
E1210 22:56:38.555297    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:57:06.259501    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 22:57:35.028809    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 start --wait true --alsologtostderr -v 5: (1m54.422924056s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (293.83s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node delete m03 --alsologtostderr -v 5
E1210 22:58:38.666296    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 node delete m03 --alsologtostderr -v 5: (17.35748566s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.98s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (174.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 stop --alsologtostderr -v 5
E1210 22:58:58.100277    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:01:38.555669    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 stop --alsologtostderr -v 5: (2m54.559671931s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5: exit status 7 (61.562354ms)

                                                
                                                
-- stdout --
	ha-670744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670744-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670744-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:01:44.916818   25440 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:01:44.916914   25440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:01:44.916922   25440 out.go:374] Setting ErrFile to fd 2...
	I1210 23:01:44.916927   25440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:01:44.917101   25440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:01:44.917252   25440 out.go:368] Setting JSON to false
	I1210 23:01:44.917279   25440 mustload.go:66] Loading cluster: ha-670744
	I1210 23:01:44.917384   25440 notify.go:221] Checking for updates...
	I1210 23:01:44.917648   25440 config.go:182] Loaded profile config "ha-670744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:01:44.917661   25440 status.go:174] checking status of ha-670744 ...
	I1210 23:01:44.919606   25440 status.go:371] ha-670744 host status = "Stopped" (err=<nil>)
	I1210 23:01:44.919620   25440 status.go:384] host is not running, skipping remaining checks
	I1210 23:01:44.919624   25440 status.go:176] ha-670744 status: &{Name:ha-670744 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 23:01:44.919638   25440 status.go:174] checking status of ha-670744-m02 ...
	I1210 23:01:44.920685   25440 status.go:371] ha-670744-m02 host status = "Stopped" (err=<nil>)
	I1210 23:01:44.920697   25440 status.go:384] host is not running, skipping remaining checks
	I1210 23:01:44.920701   25440 status.go:176] ha-670744-m02 status: &{Name:ha-670744-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 23:01:44.920712   25440 status.go:174] checking status of ha-670744-m04 ...
	I1210 23:01:44.921616   25440 status.go:371] ha-670744-m04 host status = "Stopped" (err=<nil>)
	I1210 23:01:44.921627   25440 status.go:384] host is not running, skipping remaining checks
	I1210 23:01:44.921631   25440 status.go:176] ha-670744-m04 status: &{Name:ha-670744-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (174.62s)
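Note: the non-zero status exit above is expected rather than a failure: minikube status exits with a non-zero code (7 in this run) whenever the hosts are stopped. A minimal sketch for scripts that stop a cluster and still want to continue:

  $ out/minikube-linux-amd64 -p ha-670744 stop
  $ out/minikube-linux-amd64 -p ha-670744 status || echo "cluster is stopped; status exited non-zero as expected"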

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1210 23:02:35.029482    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m38.725242792s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (98.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 node add --control-plane --alsologtostderr -v 5
E1210 23:03:38.666120    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-670744 node add --control-plane --alsologtostderr -v 5: (1m37.930295423s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-670744 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (98.61s)
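Note: the same flow can be reproduced by hand against an existing HA profile; an illustrative sketch using the commands from the log:

  $ out/minikube-linux-amd64 -p ha-670744 node add --control-plane
  $ out/minikube-linux-amd64 -p ha-670744 status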

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (51.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-380730 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-380730 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (51.096737183s)
--- PASS: TestJSONOutput/start/Command (51.10s)
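Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (the field layout is visible in the TestErrorJSONOutput stdout further down). A sketch of pulling out just the step messages, assuming jq is available:

  $ out/minikube-linux-amd64 start -p json-output-380730 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'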

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-380730 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-380730 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-380730 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-380730 --output=json --user=testUser: (6.859168974s)
--- PASS: TestJSONOutput/stop/Command (6.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-896033 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-896033 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.371339ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3557b902-f3ae-485c-9f8d-ba4d3e0497b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-896033] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81d4a891-cabd-4f13-bc81-5a23859d05e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22061"}}
	{"specversion":"1.0","id":"7fe64b2a-c78d-4559-aad0-0a5767ef981a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6251651b-bcc6-4da3-ac92-9042645c6e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig"}}
	{"specversion":"1.0","id":"ded26cd2-1d69-488e-af45-1a41867ddc8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube"}}
	{"specversion":"1.0","id":"22d9d388-d6f0-4f89-8385-7b2a24e097b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"99b99ea2-8d88-483f-a945-ce7a85098f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1cf5c92f-0019-4953-9c08-4adddea6a627","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-896033" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-896033
--- PASS: TestErrorJSONOutput (0.22s)
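Note: the last event in the stdout above is the machine-readable failure; its data block carries name, exitcode, message and advice. A sketch of extracting it, assuming jq is available (the minikube command itself exits 56 here, as the test expects):

  $ out/minikube-linux-amd64 start -p json-output-error-896033 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'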

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (79.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-234258 --driver=kvm2  --container-runtime=crio
E1210 23:06:38.558167    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:06:41.735630    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-234258 --driver=kvm2  --container-runtime=crio: (36.703842305s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-236186 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-236186 --driver=kvm2  --container-runtime=crio: (40.037200699s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-234258
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-236186
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-236186" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-236186
helpers_test.go:176: Cleaning up "first-234258" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-234258
--- PASS: TestMinikubeProfile (79.34s)
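Note: 'minikube profile NAME' switches the active profile and 'profile list -ojson' reports all of them; the sequence above amounts to the following (illustrative, profile names taken from this run):

  $ out/minikube-linux-amd64 profile first-234258
  $ out/minikube-linux-amd64 profile list -ojson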

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-176229 --memory=3072 --mount-string /tmp/TestMountStartserial4148862726/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1210 23:07:35.031797    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-176229 --memory=3072 --mount-string /tmp/TestMountStartserial4148862726/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.687907713s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-176229 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-176229 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
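Note: taken together, StartWithMountFirst and VerifyMountFirst amount to the following manual sequence; flags and paths are copied from the log above:

  $ out/minikube-linux-amd64 start -p mount-start-1-176229 --memory=3072 \
      --mount-string /tmp/TestMountStartserial4148862726/001:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=kvm2 --container-runtime=crio
  $ out/minikube-linux-amd64 -p mount-start-1-176229 ssh -- findmnt --json /minikube-host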

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-191027 --memory=3072 --mount-string /tmp/TestMountStartserial4148862726/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1210 23:08:01.623071    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-191027 --memory=3072 --mount-string /tmp/TestMountStartserial4148862726/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.467980958s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-176229 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-191027
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-191027: (1.276579812s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.59s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-191027
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-191027: (17.589244097s)
--- PASS: TestMountStart/serial/RestartStopped (18.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-191027 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (129.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 23:08:38.666619    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m9.518784443s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.88s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-954539 -- rollout status deployment/busybox: (4.38906877s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-7w7dk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-tmw7w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-7w7dk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-tmw7w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-7w7dk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-tmw7w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.96s)
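Note: the DNS checks run nslookup inside each busybox pod created by the deployment; pod names are generated, so the two used above are specific to this run. The general shape, with POD_NAME as a placeholder:

  $ out/minikube-linux-amd64 kubectl -p multinode-954539 -- rollout status deployment/busybox
  $ out/minikube-linux-amd64 kubectl -p multinode-954539 -- get pods -o jsonpath='{.items[*].metadata.name}'
  $ out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec POD_NAME -- nslookup kubernetes.default.svc.cluster.local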

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-7w7dk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-7w7dk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-tmw7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec busybox-7b57f96db7-tmw7w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
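Note: host.minikube.internal resolves, inside the guest, to the host side of the VM network (192.168.39.1 in this run); the awk/cut pipeline above only extracts that address from the nslookup output so the pods can ping it. Sketch, with POD_NAME as a placeholder:

  $ out/minikube-linux-amd64 kubectl -p multinode-954539 -- exec POD_NAME -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"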

                                                
                                    
TestMultiNode/serial/AddNode (41.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-954539 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-954539 -v=5 --alsologtostderr: (41.434407305s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-954539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp testdata/cp-test.txt multinode-954539:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516363858/001/cp-test_multinode-954539.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539:/home/docker/cp-test.txt multinode-954539-m02:/home/docker/cp-test_multinode-954539_multinode-954539-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test_multinode-954539_multinode-954539-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539:/home/docker/cp-test.txt multinode-954539-m03:/home/docker/cp-test_multinode-954539_multinode-954539-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test_multinode-954539_multinode-954539-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp testdata/cp-test.txt multinode-954539-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516363858/001/cp-test_multinode-954539-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m02:/home/docker/cp-test.txt multinode-954539:/home/docker/cp-test_multinode-954539-m02_multinode-954539.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test_multinode-954539-m02_multinode-954539.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m02:/home/docker/cp-test.txt multinode-954539-m03:/home/docker/cp-test_multinode-954539-m02_multinode-954539-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test_multinode-954539-m02_multinode-954539-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp testdata/cp-test.txt multinode-954539-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516363858/001/cp-test_multinode-954539-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m03:/home/docker/cp-test.txt multinode-954539:/home/docker/cp-test_multinode-954539-m03_multinode-954539.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539 "sudo cat /home/docker/cp-test_multinode-954539-m03_multinode-954539.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 cp multinode-954539-m03:/home/docker/cp-test.txt multinode-954539-m02:/home/docker/cp-test_multinode-954539-m03_multinode-954539-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test_multinode-954539-m03_multinode-954539-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.97s)
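Note: the copy matrix above exercises minikube cp in all three directions (host to node, node to host, node to node), with ssh -n NODE used to read each file back. The two basic forms, using names from this run:

  $ out/minikube-linux-amd64 -p multinode-954539 cp testdata/cp-test.txt multinode-954539-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p multinode-954539 ssh -n multinode-954539-m02 "sudo cat /home/docker/cp-test.txt"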

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-954539 node stop m03: (1.711984373s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954539 status: exit status 7 (322.540281ms)

                                                
                                                
-- stdout --
	multinode-954539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-954539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-954539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr: exit status 7 (323.666878ms)

                                                
                                                
-- stdout --
	multinode-954539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-954539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-954539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:11:37.562432   31082 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:11:37.562760   31082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:11:37.562769   31082 out.go:374] Setting ErrFile to fd 2...
	I1210 23:11:37.562778   31082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:11:37.562936   31082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:11:37.563087   31082 out.go:368] Setting JSON to false
	I1210 23:11:37.563111   31082 mustload.go:66] Loading cluster: multinode-954539
	I1210 23:11:37.563182   31082 notify.go:221] Checking for updates...
	I1210 23:11:37.563459   31082 config.go:182] Loaded profile config "multinode-954539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:11:37.563471   31082 status.go:174] checking status of multinode-954539 ...
	I1210 23:11:37.565476   31082 status.go:371] multinode-954539 host status = "Running" (err=<nil>)
	I1210 23:11:37.565494   31082 host.go:66] Checking if "multinode-954539" exists ...
	I1210 23:11:37.568221   31082 main.go:143] libmachine: domain multinode-954539 has defined MAC address 52:54:00:35:43:2a in network mk-multinode-954539
	I1210 23:11:37.568697   31082 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:43:2a", ip: ""} in network mk-multinode-954539: {Iface:virbr1 ExpiryTime:2025-12-11 00:08:45 +0000 UTC Type:0 Mac:52:54:00:35:43:2a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-954539 Clientid:01:52:54:00:35:43:2a}
	I1210 23:11:37.568732   31082 main.go:143] libmachine: domain multinode-954539 has defined IP address 192.168.39.116 and MAC address 52:54:00:35:43:2a in network mk-multinode-954539
	I1210 23:11:37.568953   31082 host.go:66] Checking if "multinode-954539" exists ...
	I1210 23:11:37.569197   31082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:11:37.571517   31082 main.go:143] libmachine: domain multinode-954539 has defined MAC address 52:54:00:35:43:2a in network mk-multinode-954539
	I1210 23:11:37.571965   31082 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:43:2a", ip: ""} in network mk-multinode-954539: {Iface:virbr1 ExpiryTime:2025-12-11 00:08:45 +0000 UTC Type:0 Mac:52:54:00:35:43:2a Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-954539 Clientid:01:52:54:00:35:43:2a}
	I1210 23:11:37.571998   31082 main.go:143] libmachine: domain multinode-954539 has defined IP address 192.168.39.116 and MAC address 52:54:00:35:43:2a in network mk-multinode-954539
	I1210 23:11:37.572165   31082 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/multinode-954539/id_rsa Username:docker}
	I1210 23:11:37.653557   31082 ssh_runner.go:195] Run: systemctl --version
	I1210 23:11:37.660495   31082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:11:37.677608   31082 kubeconfig.go:125] found "multinode-954539" server: "https://192.168.39.116:8443"
	I1210 23:11:37.677643   31082 api_server.go:166] Checking apiserver status ...
	I1210 23:11:37.677683   31082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:11:37.698262   31082 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	W1210 23:11:37.709750   31082 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 23:11:37.709817   31082 ssh_runner.go:195] Run: ls
	I1210 23:11:37.714945   31082 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I1210 23:11:37.720329   31082 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I1210 23:11:37.720363   31082 status.go:463] multinode-954539 apiserver status = Running (err=<nil>)
	I1210 23:11:37.720377   31082 status.go:176] multinode-954539 status: &{Name:multinode-954539 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 23:11:37.720400   31082 status.go:174] checking status of multinode-954539-m02 ...
	I1210 23:11:37.722032   31082 status.go:371] multinode-954539-m02 host status = "Running" (err=<nil>)
	I1210 23:11:37.722050   31082 host.go:66] Checking if "multinode-954539-m02" exists ...
	I1210 23:11:37.724338   31082 main.go:143] libmachine: domain multinode-954539-m02 has defined MAC address 52:54:00:a4:c5:3d in network mk-multinode-954539
	I1210 23:11:37.724749   31082 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:3d", ip: ""} in network mk-multinode-954539: {Iface:virbr1 ExpiryTime:2025-12-11 00:10:10 +0000 UTC Type:0 Mac:52:54:00:a4:c5:3d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:multinode-954539-m02 Clientid:01:52:54:00:a4:c5:3d}
	I1210 23:11:37.724774   31082 main.go:143] libmachine: domain multinode-954539-m02 has defined IP address 192.168.39.136 and MAC address 52:54:00:a4:c5:3d in network mk-multinode-954539
	I1210 23:11:37.724931   31082 host.go:66] Checking if "multinode-954539-m02" exists ...
	I1210 23:11:37.725190   31082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:11:37.727100   31082 main.go:143] libmachine: domain multinode-954539-m02 has defined MAC address 52:54:00:a4:c5:3d in network mk-multinode-954539
	I1210 23:11:37.727506   31082 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:3d", ip: ""} in network mk-multinode-954539: {Iface:virbr1 ExpiryTime:2025-12-11 00:10:10 +0000 UTC Type:0 Mac:52:54:00:a4:c5:3d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:multinode-954539-m02 Clientid:01:52:54:00:a4:c5:3d}
	I1210 23:11:37.727530   31082 main.go:143] libmachine: domain multinode-954539-m02 has defined IP address 192.168.39.136 and MAC address 52:54:00:a4:c5:3d in network mk-multinode-954539
	I1210 23:11:37.727645   31082 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/multinode-954539-m02/id_rsa Username:docker}
	I1210 23:11:37.811140   31082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:11:37.826359   31082 status.go:176] multinode-954539-m02 status: &{Name:multinode-954539-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 23:11:37.826404   31082 status.go:174] checking status of multinode-954539-m03 ...
	I1210 23:11:37.827908   31082 status.go:371] multinode-954539-m03 host status = "Stopped" (err=<nil>)
	I1210 23:11:37.827925   31082 status.go:384] host is not running, skipping remaining checks
	I1210 23:11:37.827930   31082 status.go:176] multinode-954539-m03 status: &{Name:multinode-954539-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 node start m03 -v=5 --alsologtostderr
E1210 23:11:38.555601    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-954539 node start m03 -v=5 --alsologtostderr: (37.318392212s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (290.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954539
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-954539
E1210 23:12:35.034553    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:13:38.666146    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-954539: (2m48.705506238s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954539 --wait=true -v=5 --alsologtostderr
E1210 23:15:38.101965    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:16:38.555881    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954539 --wait=true -v=5 --alsologtostderr: (2m1.64537298s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954539
--- PASS: TestMultiNode/serial/RestartKeepsNodes (290.47s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-954539 node delete m03: (2.168654419s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.62s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (165.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 stop
E1210 23:17:35.033959    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:18:38.666573    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-954539 stop: (2m45.831496533s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954539 status: exit status 7 (63.397408ms)

                                                
                                                
-- stdout --
	multinode-954539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-954539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr: exit status 7 (67.409882ms)

                                                
                                                
-- stdout --
	multinode-954539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-954539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:19:54.716077   33781 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:19:54.716179   33781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:19:54.716184   33781 out.go:374] Setting ErrFile to fd 2...
	I1210 23:19:54.716188   33781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:19:54.716375   33781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:19:54.716550   33781 out.go:368] Setting JSON to false
	I1210 23:19:54.716575   33781 mustload.go:66] Loading cluster: multinode-954539
	I1210 23:19:54.716732   33781 notify.go:221] Checking for updates...
	I1210 23:19:54.716978   33781 config.go:182] Loaded profile config "multinode-954539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:19:54.716998   33781 status.go:174] checking status of multinode-954539 ...
	I1210 23:19:54.719599   33781 status.go:371] multinode-954539 host status = "Stopped" (err=<nil>)
	I1210 23:19:54.719620   33781 status.go:384] host is not running, skipping remaining checks
	I1210 23:19:54.719626   33781 status.go:176] multinode-954539 status: &{Name:multinode-954539 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 23:19:54.719643   33781 status.go:174] checking status of multinode-954539-m02 ...
	I1210 23:19:54.721124   33781 status.go:371] multinode-954539-m02 host status = "Stopped" (err=<nil>)
	I1210 23:19:54.721140   33781 status.go:384] host is not running, skipping remaining checks
	I1210 23:19:54.721145   33781 status.go:176] multinode-954539-m02 status: &{Name:multinode-954539-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (165.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (93.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m32.56624396s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-954539 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (93.03s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-954539
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954539-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-954539-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.807683ms)

                                                
                                                
-- stdout --
	* [multinode-954539-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-954539-m02' is duplicated with machine name 'multinode-954539-m02' in profile 'multinode-954539'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-954539-m03 --driver=kvm2  --container-runtime=crio
E1210 23:21:38.558471    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-954539-m03 --driver=kvm2  --container-runtime=crio: (40.64537525s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-954539
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-954539: exit status 80 (200.197835ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-954539 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-954539-m03 already exists in multinode-954539-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-954539-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.79s)
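A minimal sketch of the name-conflict behaviour exercised above (assuming minikube is on PATH; this run uses out/minikube-linux-amd64):

	# A profile name that collides with an existing machine name is rejected with
	# MK_USAGE (exit status 14); a unique name works and is cleaned up afterwards.
	minikube start -p multinode-954539-m02 --driver=kvm2 --container-runtime=crio; echo "exit: $?"   # expect 14
	minikube start -p multinode-954539-m03 --driver=kvm2 --container-runtime=crio
	minikube delete -p multinode-954539-m03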

                                                
                                    
x
+
TestScheduledStopUnix (107.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-584690 --memory=3072 --driver=kvm2  --container-runtime=crio
E1210 23:24:41.624579    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-584690 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.896203321s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584690 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 23:25:16.107514   36147 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:25:16.107685   36147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:16.107697   36147 out.go:374] Setting ErrFile to fd 2...
	I1210 23:25:16.107702   36147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:16.107902   36147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:25:16.108171   36147 out.go:368] Setting JSON to false
	I1210 23:25:16.108283   36147 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:16.109344   36147 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:25:16.109507   36147 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/config.json ...
	I1210 23:25:16.109957   36147 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:16.110111   36147 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-584690 -n scheduled-stop-584690
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584690 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 23:25:16.392085   36192 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:25:16.392310   36192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:16.392319   36192 out.go:374] Setting ErrFile to fd 2...
	I1210 23:25:16.392324   36192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:16.392525   36192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:25:16.392754   36192 out.go:368] Setting JSON to false
	I1210 23:25:16.392948   36192 daemonize_unix.go:73] killing process 36181 as it is an old scheduled stop
	I1210 23:25:16.393044   36192 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:16.393532   36192 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:25:16.393634   36192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/config.json ...
	I1210 23:25:16.393864   36192 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:16.394011   36192 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 23:25:16.398104    9065 retry.go:31] will retry after 117.204µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.399269    9065 retry.go:31] will retry after 153.497µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.400454    9065 retry.go:31] will retry after 329.5µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.401589    9065 retry.go:31] will retry after 387.769µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.402730    9065 retry.go:31] will retry after 638.425µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.403862    9065 retry.go:31] will retry after 609.615µs: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.405022    9065 retry.go:31] will retry after 1.43357ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.407243    9065 retry.go:31] will retry after 1.997362ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.409484    9065 retry.go:31] will retry after 3.239064ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.413711    9065 retry.go:31] will retry after 5.602712ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.420010    9065 retry.go:31] will retry after 5.277355ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.426275    9065 retry.go:31] will retry after 9.498973ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.436518    9065 retry.go:31] will retry after 10.29298ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.447810    9065 retry.go:31] will retry after 29.047752ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.477061    9065 retry.go:31] will retry after 29.050064ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
I1210 23:25:16.506255    9065 retry.go:31] will retry after 22.651563ms: open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584690 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584690 -n scheduled-stop-584690
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584690
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-584690 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 23:25:42.099070   36355 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:25:42.099195   36355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:42.099207   36355 out.go:374] Setting ErrFile to fd 2...
	I1210 23:25:42.099212   36355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:25:42.099417   36355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:25:42.099684   36355 out.go:368] Setting JSON to false
	I1210 23:25:42.099784   36355 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:42.100145   36355 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:25:42.100230   36355 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/scheduled-stop-584690/config.json ...
	I1210 23:25:42.100475   36355 mustload.go:66] Loading cluster: scheduled-stop-584690
	I1210 23:25:42.100597   36355 config.go:182] Loaded profile config "scheduled-stop-584690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-584690
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-584690: exit status 7 (61.586929ms)

                                                
                                                
-- stdout --
	scheduled-stop-584690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584690 -n scheduled-stop-584690
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-584690 -n scheduled-stop-584690: exit status 7 (60.052978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-584690" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-584690
--- PASS: TestScheduledStopUnix (107.50s)
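The scheduled-stop flow exercised above, as a sketch (flags taken from this run; assumes minikube on PATH):

	minikube stop -p scheduled-stop-584690 --schedule 5m       # arm a stop five minutes out
	minikube stop -p scheduled-stop-584690 --schedule 15s      # re-scheduling replaces the previous timer
	minikube stop -p scheduled-stop-584690 --cancel-scheduled  # cancel all pending scheduled stops
	minikube status -p scheduled-stop-584690 --format='{{.Host}}'  # prints "Stopped" and exits 7 once the stop has fired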

                                                
                                    
x
+
TestRunningBinaryUpgrade (372.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.39836279 start -p running-upgrade-334703 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.39836279 start -p running-upgrade-334703 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (56.823673247s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-334703 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-334703 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m11.197558815s)
helpers_test.go:176: Cleaning up "running-upgrade-334703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-334703
--- PASS: TestRunningBinaryUpgrade (372.13s)
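The running-binary upgrade pattern from this test, as a sketch: build the cluster with an older release binary, then re-run start on the same profile with the binary under test (the /tmp path is simply the release this job downloaded):

	/tmp/minikube-v1.35.0.39836279 start -p running-upgrade-334703 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-334703 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-334703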

                                                
                                    
x
+
TestKubernetesUpgrade (148.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.100960559s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-500118
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-500118: (2.062523541s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-500118 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-500118 status --format={{.Host}}: exit status 7 (68.397787ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.606328523s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-500118 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.526137ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-500118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-500118
	    minikube start -p kubernetes-upgrade-500118 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5001182 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-500118 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.735311391s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-500118" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-500118
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-500118: (1.007860252s)
--- PASS: TestKubernetesUpgrade (148.75s)
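The upgrade path exercised above, as a sketch (assumes minikube on PATH): upgrade by stopping and restarting with a newer --kubernetes-version; a direct downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106):

	minikube start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	minikube stop -p kubernetes-upgrade-500118
	minikube start -p kubernetes-upgrade-500118 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
	minikube start -p kubernetes-upgrade-500118 --kubernetes-version=v1.28.0; echo "exit: $?"   # expect 106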

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (101.741833ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-614441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
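The fix suggested by the error above, as a sketch: clear any global kubernetes-version setting before starting without Kubernetes:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-614441 --no-kubernetes --driver=kvm2 --container-runtime=crio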

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (95.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614441 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1210 23:26:38.556058    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614441 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m34.814655644s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-614441 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.60s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (113.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3778510399 start -p stopped-upgrade-878419 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3778510399 start -p stopped-upgrade-878419 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m6.992567365s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3778510399 -p stopped-upgrade-878419 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3778510399 -p stopped-upgrade-878419 stop: (1.921404589s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-878419 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-878419 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.564723777s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.48s)
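The stopped-binary upgrade pattern from this test, as a sketch (the /tmp path is the old release this job downloaded): create and stop the cluster with the old binary, then start the same profile with the new one:

	/tmp/minikube-v1.35.0.3778510399 start -p stopped-upgrade-878419 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.35.0.3778510399 -p stopped-upgrade-878419 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-878419 --memory=3072 --driver=kvm2 --container-runtime=crio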

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (4.79865032s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-614441 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-614441 status -o json: exit status 2 (211.782272ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-614441","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-614441
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.96s)
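For reference, the status exit codes observed in this report (a sketch, assuming minikube on PATH): 0 with everything running, 2 when the host is up but the Kubernetes components are stopped, 7 when the host itself is stopped:

	minikube -p NoKubernetes-614441 status -o json; echo "exit: $?"   # 2 here: Host Running, Kubelet/APIServer Stopped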

                                                
                                    
x
+
TestNoKubernetes/serial/Start (19.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614441 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (19.58363729s)
--- PASS: TestNoKubernetes/serial/Start (19.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-614441 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-614441 "sudo systemctl is-active --quiet service kubelet": exit status 1 (161.215339ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
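The in-guest check used above, as a sketch: systemctl is-active --quiet exits non-zero when the unit is inactive, which minikube ssh surfaces as exit status 1:

	minikube ssh -p NoKubernetes-614441 "sudo systemctl is-active --quiet service kubelet"; echo "exit: $?"   # non-zero while kubelet is not running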

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-614441
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-614441: (1.347806717s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (49.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614441 --driver=kvm2  --container-runtime=crio
E1210 23:28:38.666422    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614441 --driver=kvm2  --container-runtime=crio: (49.382621262s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (49.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-614441 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-614441 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.02752ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-878419
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-878419: (1.421881439s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
x
+
TestPause/serial/Start (93.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327364 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-327364 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m33.194920944s)
--- PASS: TestPause/serial/Start (93.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (37.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327364 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-327364 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.711924752s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-571190 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-571190 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (135.153728ms)

                                                
                                                
-- stdout --
	* [false-571190] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:31:30.256890   41455 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:31:30.257164   41455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:31:30.257174   41455 out.go:374] Setting ErrFile to fd 2...
	I1210 23:31:30.257178   41455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:31:30.257372   41455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
	I1210 23:31:30.257969   41455 out.go:368] Setting JSON to false
	I1210 23:31:30.258971   41455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4431,"bootTime":1765405059,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 23:31:30.259030   41455 start.go:143] virtualization: kvm guest
	I1210 23:31:30.261118   41455 out.go:179] * [false-571190] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 23:31:30.262614   41455 notify.go:221] Checking for updates...
	I1210 23:31:30.262646   41455 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:31:30.263906   41455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:31:30.265530   41455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
	I1210 23:31:30.266774   41455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
	I1210 23:31:30.268184   41455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 23:31:30.269673   41455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:31:30.271415   41455 config.go:182] Loaded profile config "force-systemd-flag-839649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:31:30.271618   41455 config.go:182] Loaded profile config "pause-327364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:31:30.271773   41455 config.go:182] Loaded profile config "running-upgrade-334703": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 23:31:30.271893   41455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:31:30.310341   41455 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 23:31:30.311627   41455 start.go:309] selected driver: kvm2
	I1210 23:31:30.311648   41455 start.go:927] validating driver "kvm2" against <nil>
	I1210 23:31:30.311664   41455 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:31:30.314151   41455 out.go:203] 
	W1210 23:31:30.315383   41455 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 23:31:30.316663   41455 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-571190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.37:8443
  name: pause-327364
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-334703
contexts:
- context:
    cluster: pause-327364
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-327364
  name: pause-327364
- context:
    cluster: running-upgrade-334703
    user: running-upgrade-334703
  name: running-upgrade-334703
current-context: ""
kind: Config
users:
- name: pause-327364
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.key
- name: running-upgrade-334703
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-571190

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: cri-docker daemon config:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: cri-dockerd version:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: containerd daemon status:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: containerd daemon config:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: /etc/containerd/config.toml:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: containerd config dump:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: crio daemon status:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: crio daemon config:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: /etc/crio:
* Profile "false-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

>>> host: crio config:
* Profile "false-571190" not found. Run "minikube start -p false-571190" to view all profiles.
To start a cluster, run: "minikube start -p false-571190"

----------------------- debugLogs end: false-571190 [took: 4.498912986s] --------------------------------
helpers_test.go:176: Cleaning up "false-571190" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-571190
--- PASS: TestNetworkPlugins/group/false (4.82s)
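
The repeated "Profile \"false-571190\" not found" lines in the debug dump above come from per-profile host commands that run after the profile has already been cleaned up. A minimal sketch, not part of the test suite, of how a wrapper could skip that sweep when the profile is gone; the binary path and profile name are copied from the log, and the check is a plain substring match on the `profile list --output json` output rather than a parse of any particular JSON schema (the exact schema is not shown in this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists runs "minikube profile list --output json" and reports whether
// the given profile name appears anywhere in the output. A rough substring
// check only; it does not assume a specific JSON layout.
func profileExists(minikubePath, name string) (bool, error) {
	out, err := exec.Command(minikubePath, "profile", "list", "--output", "json").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("profile list: %v: %s", err, out)
	}
	return strings.Contains(string(out), `"`+name+`"`), nil
}

func main() {
	ok, err := profileExists("out/minikube-linux-amd64", "false-571190")
	if err != nil {
		fmt.Println("could not list profiles:", err)
		return
	}
	if !ok {
		fmt.Println(`profile not found; skipping ">>> host:" debug dumps`)
		return
	}
	fmt.Println("profile exists; collect debug logs as usual")
}
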

                                                
                                    
x
+
TestISOImage/Setup (19.56s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-072430 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-072430 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.559022859s)
--- PASS: TestISOImage/Setup (19.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (67.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-805969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-805969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m7.151435329s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (67.15s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.16s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)
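
Each of the TestISOImage/Binaries subtests above runs the same probe, `ssh "which <tool>"`, against the guest-072430 VM. A compact table-driven sketch of that pattern using os/exec; this is not the actual iso_test.go code, and the minikube binary path, profile name, and tool list are taken from the log above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "guest-072430"

	// The guest binaries probed by the Binaries subtests above.
	tools := []string{
		"crictl", "curl", "docker", "git", "iptables",
		"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService",
	}

	for _, tool := range tools {
		// Equivalent to: out/minikube-linux-amd64 -p guest-072430 ssh "which <tool>"
		out, err := exec.Command(minikube, "-p", profile, "ssh", "which "+tool).CombinedOutput()
		if err != nil {
			fmt.Printf("MISSING %-12s (%v)\n", tool, err)
			continue
		}
		fmt.Printf("found   %-12s -> %s", tool, out)
	}
}
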

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (113.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-580587 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-580587 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m53.891230299s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.89s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327364 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-327364 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-327364 --output=json --layout=cluster: exit status 2 (211.485045ms)

                                                
                                                
-- stdout --
	{"Name":"pause-327364","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-327364","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.21s)
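
The `--output=json --layout=cluster` payload in the stdout block above is compact but structured. A small sketch of Go types that would decode that one sample; the field names are taken directly from the printed JSON, and the real minikube schema may carry additional fields not shown here:

package main

import (
	"encoding/json"
	"fmt"
)

// Types inferred from the single status payload shown above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	Step          string               `json:"Step"`
	StepDetail    string               `json:"StepDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// A trimmed copy of the payload printed by the VerifyStatus step above.
	raw := `{"Name":"pause-327364","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-327364","StatusCode":200,"StatusName":"OK","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("cluster %s is %s; kubelet on %s is %s\n",
		st.Name, st.StatusName, st.Nodes[0].Name, st.Nodes[0].Components["kubelet"].StatusName)
}

Note that a paused cluster reports StatusCode 418 while its kubelet reports 405 ("Stopped"), which is why the VerifyStatus step above tolerates exit status 2 from the status command.
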

                                                
                                    
x
+
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-327364 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327364 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-327364 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.83s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.83s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1210 23:32:18.104200    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.825419753s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (101.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-346344 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1210 23:32:35.029410    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-346344 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m41.189557584s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-805969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4ad3d87c-a91a-4ce7-b835-3d8b98bc9dcd] Pending
helpers_test.go:353: "busybox" [4ad3d87c-a91a-4ce7-b835-3d8b98bc9dcd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4ad3d87c-a91a-4ce7-b835-3d8b98bc9dcd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004614422s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-805969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)
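
The DeployApp steps above follow the same shape in every group: create the busybox pod from testdata/busybox.yaml, wait until it is Running, then exec `ulimit -n` inside it. A minimal sketch of that sequence driving kubectl through os/exec; the context name and manifest path are copied from the log, and `kubectl wait` with a label selector stands in here for the test helper's own polling, which is an assumption rather than the suite's actual mechanism:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns any error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	ctx := "old-k8s-version-805969"

	steps := [][]string{
		// Same three steps as the DeployApp log above.
		{"kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml"},
		{"kubectl", "--context", ctx, "wait", "--for=condition=ready",
			"pod", "-l", "integration-test=busybox", "--timeout=8m"},
		{"kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}
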

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-805969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-805969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.69247146s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-805969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (88.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-805969 --alsologtostderr -v=3
E1210 23:33:38.666819    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-805969 --alsologtostderr -v=3: (1m28.49396154s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-580587 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [49833f78-7f72-4366-8b63-d7aa3966407c] Pending
helpers_test.go:353: "busybox" [49833f78-7f72-4366-8b63-d7aa3966407c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [49833f78-7f72-4366-8b63-d7aa3966407c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005769958s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-580587 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-346344 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [868ecb21-ef2b-414e-982c-937b1db52543] Pending
helpers_test.go:353: "busybox" [868ecb21-ef2b-414e-982c-937b1db52543] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [868ecb21-ef2b-414e-982c-937b1db52543] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004559644s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-346344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-580587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-580587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071080489s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-580587 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (84.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-580587 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-580587 --alsologtostderr -v=3: (1m24.932778551s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (84.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-346344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-346344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (87.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-346344 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-346344 --alsologtostderr -v=3: (1m27.368745251s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805969 -n old-k8s-version-805969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805969 -n old-k8s-version-805969: exit status 7 (58.952916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-805969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
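
EnableAddonAfterStop first checks the host state and, as the "(may be ok)" note above shows, a non-zero exit is tolerated when the cluster is merely stopped. A hedged sketch of that tolerance with os/exec; exit status 7 paired with "Stopped" output is taken from the log above, while treating every other non-zero code as a real failure is an assumption of this sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to: out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <profile>
	profile := "old-k8s-version-805969"
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()

	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host is", host)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7 && host == "Stopped":
		// Matches the "status error: exit status 7 (may be ok)" case above:
		// the profile exists but the VM is stopped, so addons can still be enabled.
		fmt.Println("host is stopped; safe to enable addons before the next start")
	default:
		fmt.Println("unexpected status failure:", err)
	}
}
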

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (38.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-805969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-805969 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (38.338397264s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-805969 -n old-k8s-version-805969
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (38.57s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-j8tqf" [f687dccc-23ac-413d-b9cd-c269332c798e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-j8tqf" [f687dccc-23ac-413d-b9cd-c269332c798e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004788579s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580587 -n no-preload-580587
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580587 -n no-preload-580587: exit status 7 (69.292008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-580587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (52.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-580587 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-580587 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (51.71608707s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580587 -n no-preload-580587
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-j8tqf" [f687dccc-23ac-413d-b9cd-c269332c798e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003634329s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-805969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927242 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927242 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m30.30828009s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-805969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-805969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805969 -n old-k8s-version-805969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805969 -n old-k8s-version-805969: exit status 2 (230.340839ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805969 -n old-k8s-version-805969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805969 -n old-k8s-version-805969: exit status 2 (225.845632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-805969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-805969 -n old-k8s-version-805969
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-805969 -n old-k8s-version-805969
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (63.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-944234 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-944234 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m3.564284839s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346344 -n embed-certs-346344
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346344 -n embed-certs-346344: exit status 7 (62.629666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-346344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (86.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-346344 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-346344 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m26.506892733s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-346344 -n embed-certs-346344
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (86.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmknd" [4a6bafb4-72a0-4be0-b750-2e3b10cb7563] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmknd" [4a6bafb4-72a0-4be0-b750-2e3b10cb7563] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003942714s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lmknd" [4a6bafb4-72a0-4be0-b750-2e3b10cb7563] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003510771s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-580587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-580587 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-580587 --alsologtostderr -v=1
E1210 23:36:38.555205    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580587 -n no-preload-580587
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580587 -n no-preload-580587: exit status 2 (231.032057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580587 -n no-preload-580587
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580587 -n no-preload-580587: exit status 2 (225.33645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-580587 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580587 -n no-preload-580587
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580587 -n no-preload-580587
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (60.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m0.245681029s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-944234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-944234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.142226956s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-944234 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-944234 --alsologtostderr -v=3: (7.204919488s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944234 -n newest-cni-944234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944234 -n newest-cni-944234: exit status 7 (86.995306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-944234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (42.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-944234 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-944234 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (42.501356276s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944234 -n newest-cni-944234
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927242 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1fab871f-92a9-42d7-a4cf-5eaa0a805bf2] Pending
helpers_test.go:353: "busybox" [1fab871f-92a9-42d7-a4cf-5eaa0a805bf2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1fab871f-92a9-42d7-a4cf-5eaa0a805bf2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005331524s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927242 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-khtb7" [93cd8918-4917-4346-9536-890c98f4d898] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006073842s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101846078s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-927242 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (85.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-927242 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-927242 --alsologtostderr -v=3: (1m25.470378911s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (85.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-khtb7" [93cd8918-4917-4346-9536-890c98f4d898] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004870731s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-346344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-346344 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-346344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-346344 -n embed-certs-346344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-346344 -n embed-certs-346344: exit status 2 (226.677524ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-346344 -n embed-certs-346344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-346344 -n embed-certs-346344: exit status 2 (227.390274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-346344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-346344 -n embed-certs-346344
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-346344 -n embed-certs-346344
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)
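The Pause step above tolerates exit status 2 from minikube status while components are paused (APIServer reports Paused, Kubelet reports Stopped), then unpauses and re-checks. A minimal sketch of the same sequence, assuming a locally installed minikube binary in place of the CI build at out/minikube-linux-amd64:

	# Pause the control plane and kubelet for the embed-certs-346344 profile.
	minikube pause -p embed-certs-346344 --alsologtostderr -v=1
	minikube status --format='{{.APIServer}}' -p embed-certs-346344   # prints "Paused", exits 2
	minikube status --format='{{.Kubelet}}' -p embed-certs-346344     # prints "Stopped", exits 2
	# Unpause and query again; the status checks should exit 0 once components are running.
	minikube unpause -p embed-certs-346344 --alsologtostderr -v=1
	minikube status --format='{{.APIServer}}' -p embed-certs-346344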

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (60.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1210 23:37:35.029597    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-820240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m0.331909725s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-944234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-944234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944234 -n newest-cni-944234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944234 -n newest-cni-944234: exit status 2 (211.761827ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944234 -n newest-cni-944234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944234 -n newest-cni-944234: exit status 2 (218.056128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-944234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944234 -n newest-cni-944234
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944234 -n newest-cni-944234
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.825433938s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-571190 "pgrep -a kubelet"
I1210 23:37:43.078330    9065 config.go:182] Loaded profile config "auto-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zhrnv" [7a12a2b3-85b5-426d-8016-ee0fa784860b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zhrnv" [7a12a2b3-85b5-426d-8016-ee0fa784860b] Running
E1210 23:37:52.050782    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.057201    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.068604    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.090073    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.131552    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.212992    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.374905    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:52.696574    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:37:53.338027    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.249166289s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-571190 exec deployment/netcat -- nslookup kubernetes.default
E1210 23:37:54.619773    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
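The DNS, Localhost, and HairPin checks all execute inside the netcat deployment created by NetCatPod; the three probes below are copied from the commands logged above, assuming the auto-571190 kubeconfig context is still present:

	# DNS: the pod can resolve the in-cluster API service name via cluster DNS.
	kubectl --context auto-571190 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: a TCP connect to the pod's own loopback on port 8080.
	kubectl --context auto-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod reaches itself back through the "netcat" Service name.
	kubectl --context auto-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"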

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (70.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1210 23:38:12.545129    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.770482308s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-dvx7x" [98fb2cc9-df3b-44a7-8f90-4de1dd40b20b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005403919s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-571190 "pgrep -a kubelet"
I1210 23:38:31.474609    9065 config.go:182] Loaded profile config "kindnet-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t7nnh" [a62c40f1-3579-4684-a4a5-50f44be6a1dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 23:38:33.027114    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:38:38.666633    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-t7nnh" [a62c40f1-3579-4684-a4a5-50f44be6a1dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005026922s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242: exit status 7 (68.161832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-927242 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
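EnableAddonAfterStop verifies that an addon can be enabled against a stopped cluster: the Host status query exits 7 (Stopped), which the test accepts, and the dashboard addon is then enabled with a pinned MetricsScraper image. A minimal sketch of the same two steps, again assuming a locally installed minikube binary:

	minikube status --format='{{.Host}}' -p default-k8s-diff-port-927242   # prints "Stopped", exits 7
	minikube addons enable dashboard -p default-k8s-diff-port-927242 \
	    --images=MetricsScraper=registry.k8s.io/echoserver:1.4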

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927242 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927242 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (45.089626748s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (82.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1210 23:39:02.847335    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/no-preload-580587/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:39:13.089728    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/no-preload-580587/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:39:13.988704    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/old-k8s-version-805969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m22.327246183s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-js9px" [da8527af-86d2-4a6e-b500-b8df823fdc78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005606584s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-571190 "pgrep -a kubelet"
I1210 23:39:20.570944    9065 config.go:182] Loaded profile config "custom-flannel-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9qf9h" [f56fc669-784a-4a51-bb39-c1e8cabe70bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9qf9h" [f56fc669-784a-4a51-bb39-c1e8cabe70bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.105051486s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-571190 "pgrep -a kubelet"
I1210 23:39:24.046151    9065 config.go:182] Loaded profile config "calico-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tbspf" [f80ec1dc-608a-4081-91c4-1658b27f69c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tbspf" [f80ec1dc-608a-4081-91c4-1658b27f69c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005640522s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v8svd" [2d006b1a-dce0-4547-a1fe-ebdcf03462ce] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v8svd" [2d006b1a-dce0-4547-a1fe-ebdcf03462ce] Running
E1210 23:39:33.571535    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/no-preload-580587/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004428547s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-v8svd" [2d006b1a-dce0-4547-a1fe-ebdcf03462ce] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004308953s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-927242 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927242 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-927242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-927242 --alsologtostderr -v=1: (1.012124368s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242: exit status 2 (250.492922ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242: exit status 2 (256.643617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-927242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-927242 -n default-k8s-diff-port-927242
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m6.416816437s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (95.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-571190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.561429347s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.56s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
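Each PersistentMounts sub-test runs the same probe: df restricted to ext4 only lists the path if it is served from an ext4 filesystem (the ISO's persistent data disk rather than its tmpfs root), so the grep succeeds exactly when the mount is persistent. A minimal sketch for one of the paths, reusing the guest-072430 profile from this run:

	# Exits 0 only if /var/lib/boot2docker is backed by an ext4 mount inside the guest.
	minikube -p guest-072430 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"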

                                                
                                    
x
+
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
--- PASS: TestISOImage/VersionJSON (0.18s)
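VersionJSON reads /version.json from the guest and echoes its fields (minikube_version, commit, iso_version, kicbase_version). A minimal sketch, assuming jq is available on the host and that the JSON keys match the names printed above:

	# Dump the raw file, then extract a single field.
	minikube -p guest-072430 ssh "cat /version.json"
	minikube -p guest-072430 ssh "cat /version.json" | jq -r .iso_version   # e.g. v1.37.0-1765151505-21409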

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-072430 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
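The eBPF check only asserts that the kernel exposes BTF type information at /sys/kernel/btf/vmlinux, which CO-RE-style eBPF tooling generally requires. The same probe by hand:

	# Prints OK when the guest kernel ships BTF for its vmlinux image.
	minikube -p guest-072430 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"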
E1210 23:40:01.739845    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 23:40:14.533294    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/no-preload-580587/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-571190 "pgrep -a kubelet"
I1210 23:40:22.665642    9065 config.go:182] Loaded profile config "enable-default-cni-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-571190 replace --force -f testdata/netcat-deployment.yaml
I1210 23:40:23.678226    9065 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1210 23:40:23.916176    9065 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8wcln" [bd74e586-0914-4c3c-8368-be98878c9a9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8wcln" [bd74e586-0914-4c3c-8368-be98878c9a9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004267953s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-cxcdg" [2007ecc9-1eb7-4ad2-8acd-4cc928b47479] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003506108s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-571190 "pgrep -a kubelet"
I1210 23:40:59.738687    9065 config.go:182] Loaded profile config "flannel-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zrjdn" [0d8e5852-3151-4c5f-9a0c-af4d0fd00a81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zrjdn" [0d8e5852-3151-4c5f-9a0c-af4d0fd00a81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004190291s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-571190 "pgrep -a kubelet"
I1210 23:41:24.182344    9065 config.go:182] Loaded profile config "bridge-571190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-571190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ldp6g" [1bf29ddb-a829-4317-b1a1-ce1897b68a18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ldp6g" [1bf29ddb-a829-4317-b1a1-ce1897b68a18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004963204s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-571190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-571190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (52/437)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
374 TestStartStop/group/disable-driver-mounts 0.18
381 TestNetworkPlugins/group/kubenet 3.86
389 TestNetworkPlugins/group/cilium 3.98
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-462156 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
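TestDockerFlags only makes sense against the Docker daemon inside the cluster, which a crio job never has. A minimal sketch of a run that would exercise it, assuming the docker driver is available on the host (profile name is illustrative):

    out/minikube-linux-amd64 start -p docker-flags-demo --driver=docker --container-runtime=docker --docker-opt=debug
    out/minikube-linux-amd64 -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart"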

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
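The required combination here is the Docker driver with containerd as the in-cluster runtime. A sketch of a start command that would satisfy the check (profile name is illustrative):

    out/minikube-linux-amd64 start -p containerd-demo --driver=docker --container-runtime=containerd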

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
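docker-env points the host's docker client at the cluster's Docker engine, so with crio there is no daemon to target and the subtest skips. Illustrative usage on a docker-runtime profile (profile name assumed):

    eval $(out/minikube-linux-amd64 -p docker-env-demo docker-env)
    docker ps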

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
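Every TunnelCmd subtest below skips with this same message: minikube tunnel needs to add a route, and on this host sudo prompts for a password when running route. A sketch of a sudoers entry that would likely unblock them; the username and binary path are assumptions about this CI image:

    # /etc/sudoers.d/minikube-tunnel  (illustrative)
    jenkins ALL=(root) NOPASSWD: /sbin/route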

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
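This test drives the none driver, where minikube runs directly on the host and file ownership is handed back to the user recorded in SUDO_USER. A sketch of an invocation that meets both conditions, since sudo populates SUDO_USER itself:

    sudo -E out/minikube-linux-amd64 start --driver=none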

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
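skaffold builds images straight into the cluster through docker-env, which crio cannot offer. An illustrative flow on a docker-runtime profile (profile name assumed; the skaffold project is whatever lives in the current directory):

    out/minikube-linux-amd64 start -p skaffold-demo --container-runtime=docker
    eval $(out/minikube-linux-amd64 -p skaffold-demo docker-env)
    skaffold run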

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-519123" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-519123
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
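The group exists to cover the --disable-driver-mounts start flag, which only changes behaviour on hypervisors that auto-mount host folders, hence the VirtualBox-only restriction. A sketch of the invocation it targets (profile name is illustrative):

    out/minikube-linux-amd64 start -p disable-driver-mounts-demo --driver=virtualbox --disable-driver-mounts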

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-571190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.37:8443
  name: pause-327364
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-334703
contexts:
- context:
    cluster: pause-327364
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-327364
  name: pause-327364
- context:
    cluster: running-upgrade-334703
    user: running-upgrade-334703
  name: running-upgrade-334703
current-context: ""
kind: Config
users:
- name: pause-327364
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.key
- name: running-upgrade-334703
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.key
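The config above also explains why every kubectl probe in this debug dump fails: only the pause-327364 and running-upgrade-334703 clusters are registered and current-context is empty, so the kubenet-571190 context really does not exist. Had the profile been created, selecting it would look like:

    kubectl config get-contexts
    kubectl config use-context kubenet-571190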

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-571190

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571190"

                                                
                                                
----------------------- debugLogs end: kubenet-571190 [took: 3.634909275s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-571190" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-571190
--- SKIP: TestNetworkPlugins/group/kubenet (3.86s)
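The group skips up front because crio ships no kubenet-style networking of its own; it needs an explicit CNI. A sketch of a crio start that supplies one, with bridge picked as an arbitrary example:

    out/minikube-linux-amd64 start --container-runtime=crio --cni=bridge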

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1210 23:31:38.555730    9065 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/functional-497660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-571190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-571190" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.37:8443
  name: pause-327364
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-334703
contexts:
- context:
    cluster: pause-327364
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 23:30:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-327364
  name: pause-327364
- context:
    cluster: running-upgrade-334703
    user: running-upgrade-334703
  name: running-upgrade-334703
current-context: ""
kind: Config
users:
- name: pause-327364
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/pause-327364/client.key
- name: running-upgrade-334703
  user:
    client-certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.crt
    client-key: /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/running-upgrade-334703/client.key
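Note: the kubeconfig above contains only the pause-327364 and running-upgrade-334703 contexts and an empty current-context, which is why every kubectl call in this debugLogs dump reports that the cilium-571190 context does not exist; the cilium profile was never started because the test was skipped. A minimal check against the same kubeconfig might look like the following (commands assumed to run with the KUBECONFIG used by this test run):

	# list the contexts kubectl knows about; cilium-571190 would be absent
	kubectl config get-contexts
	# errors with "current-context is not set", matching current-context: "" above
	kubectl config current-context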

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-571190

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-571190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571190"

                                                
                                                
----------------------- debugLogs end: cilium-571190 [took: 3.817652104s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-571190" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-571190
--- SKIP: TestNetworkPlugins/group/cilium (3.98s)

                                                
                                    