Test Report: KVM_Linux_crio 22179

505b1c9a8fd96db2c5d776a2dde7c3c6efd2d048:2025-12-21:42914

Failed tests (15/435)

TestAddons/parallel/Ingress (153.33s)
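To reproduce this failure outside CI, the test can be run in isolation from a minikube checkout. This is a sketch rather than the exact CI invocation: the -run filter is standard go test, while the start-args flag is the one defined by minikube's test/integration harness and may differ between versions (check test/integration/main_test.go):

  # Build the binary under test, then run only this subtest against kvm2 + crio
  make
  go test -v -timeout 60m ./test/integration \
    -run 'TestAddons/parallel/Ingress' \
    --minikube-start-args='--driver=kvm2 --container-runtime=crio'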

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-659513 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-659513 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-659513 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [33f3ec72-704c-4201-8ff2-47eac4b359fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [33f3ec72-704c-4201-8ff2-47eac4b359fe] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004493789s
I1221 19:49:17.955805  126345 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-659513 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.596557913s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
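For reference, curl uses exit code 28 for CURLE_OPERATION_TIMEDOUT, so the remote probe timed out rather than being refused: nothing on 127.0.0.1:80 inside the VM answered the request carrying the nginx.example.com Host header within the allowed time. A minimal sketch for re-checking this by hand, assuming the addons-659513 cluster from this run is still up:

  # Is the ingress-nginx controller running, and does its service expose port 80?
  kubectl --context addons-659513 -n ingress-nginx get pods -o wide
  kubectl --context addons-659513 -n ingress-nginx get svc

  # Re-run the failing probe with an explicit timeout and verbose output
  out/minikube-linux-amd64 -p addons-659513 ssh \
    "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"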
addons_test.go:290: (dbg) Run:  kubectl --context addons-659513 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.164
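The follow-up ingress-dns check resolves hello-john.test against the VM's IP (192.168.39.164 in this run). A sketch of the same check run by hand, assuming the same profile; the hostname comes from testdata/ingress-dns-example-v1.yaml:

  # Query the ingress-dns resolver on the minikube VM directly
  MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-659513 ip)
  nslookup hello-john.test "$MINIKUBE_IP"
  dig +short hello-john.test @"$MINIKUBE_IP"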
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-659513 -n addons-659513
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 logs -n 25: (1.261650008s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-836309                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-836309 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ --download-only -p binary-mirror-061430 --alsologtostderr --binary-mirror http://127.0.0.1:41125 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-061430 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-061430                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-061430 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ addons  │ enable dashboard -p addons-659513                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-659513                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ start   │ -p addons-659513 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-659513 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-659513 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ enable headlamp -p addons-659513 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-659513 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-659513 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ ip      │ addons-659513 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ ssh     │ addons-659513 ssh cat /opt/local-path-provisioner/pvc-7cf3985a-8a2e-4729-b39d-80336e9e7676_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ ssh     │ addons-659513 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-659513                                                                                                                                                                                                                                                                                                                                                                                         │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ addons  │ addons-659513 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:49 UTC │ 21 Dec 25 19:49 UTC │
	│ ip      │ addons-659513 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-659513        │ jenkins │ v1.37.0 │ 21 Dec 25 19:51 UTC │ 21 Dec 25 19:51 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:26.793172  127170 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:26.793463  127170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:26.793474  127170 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:26.793483  127170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:26.793680  127170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:46:26.794185  127170 out.go:368] Setting JSON to false
	I1221 19:46:26.795005  127170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12537,"bootTime":1766333850,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:26.795063  127170 start.go:143] virtualization: kvm guest
	I1221 19:46:26.797090  127170 out.go:179] * [addons-659513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:26.798220  127170 notify.go:221] Checking for updates...
	I1221 19:46:26.798230  127170 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:46:26.799686  127170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:26.801148  127170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:46:26.802447  127170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:26.803877  127170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:46:26.805107  127170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:46:26.806426  127170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:26.836097  127170 out.go:179] * Using the kvm2 driver based on user configuration
	I1221 19:46:26.837263  127170 start.go:309] selected driver: kvm2
	I1221 19:46:26.837290  127170 start.go:928] validating driver "kvm2" against <nil>
	I1221 19:46:26.837311  127170 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:46:26.838320  127170 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:26.838668  127170 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:46:26.838708  127170 cni.go:84] Creating CNI manager for ""
	I1221 19:46:26.838763  127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 19:46:26.838775  127170 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1221 19:46:26.838827  127170 start.go:353] cluster config:
	{Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1221 19:46:26.838951  127170 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 19:46:26.841173  127170 out.go:179] * Starting "addons-659513" primary control-plane node in "addons-659513" cluster
	I1221 19:46:26.842551  127170 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:26.842591  127170 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 19:46:26.842604  127170 cache.go:65] Caching tarball of preloaded images
	I1221 19:46:26.842674  127170 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 19:46:26.842684  127170 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 19:46:26.843014  127170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json ...
	I1221 19:46:26.843040  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json: {Name:mk4cab2001293abff638904bb7d40fa859a87d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:26.843204  127170 start.go:360] acquireMachinesLock for addons-659513: {Name:mkd449b545e9165e82ce02652c0c22eb5894063b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1221 19:46:26.843292  127170 start.go:364] duration metric: took 51.662µs to acquireMachinesLock for "addons-659513"
	I1221 19:46:26.843318  127170 start.go:93] Provisioning new machine with config: &{Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:46:26.843371  127170 start.go:125] createHost starting for "" (driver="kvm2")
	I1221 19:46:26.844998  127170 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1221 19:46:26.845166  127170 start.go:159] libmachine.API.Create for "addons-659513" (driver="kvm2")
	I1221 19:46:26.845195  127170 client.go:173] LocalClient.Create starting
	I1221 19:46:26.845325  127170 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem
	I1221 19:46:26.864968  127170 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem
	I1221 19:46:26.904168  127170 main.go:144] libmachine: creating domain...
	I1221 19:46:26.904189  127170 main.go:144] libmachine: creating network...
	I1221 19:46:26.905574  127170 main.go:144] libmachine: found existing default network
	I1221 19:46:26.905845  127170 main.go:144] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1221 19:46:26.906347  127170 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c5c7f0}
	I1221 19:46:26.906475  127170 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-659513</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1221 19:46:26.912248  127170 main.go:144] libmachine: creating private network mk-addons-659513 192.168.39.0/24...
	I1221 19:46:26.981031  127170 main.go:144] libmachine: private network mk-addons-659513 192.168.39.0/24 created
	I1221 19:46:26.981316  127170 main.go:144] libmachine: <network>
	  <name>mk-addons-659513</name>
	  <uuid>60972456-5ec1-4ea6-b8f1-c69c8ff211b5</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:a0:9a:4e'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1221 19:46:26.981352  127170 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 ...
	I1221 19:46:26.981383  127170 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22179-122429/.minikube/cache/iso/amd64/minikube-v1.37.0-1766254259-22261-amd64.iso
	I1221 19:46:26.981396  127170 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:26.981515  127170 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22179-122429/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22179-122429/.minikube/cache/iso/amd64/minikube-v1.37.0-1766254259-22261-amd64.iso...
	I1221 19:46:27.232993  127170 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa...
	I1221 19:46:27.287717  127170 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk...
	I1221 19:46:27.287759  127170 main.go:144] libmachine: Writing magic tar header
	I1221 19:46:27.287820  127170 main.go:144] libmachine: Writing SSH key tar header
	I1221 19:46:27.287910  127170 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 ...
	I1221 19:46:27.287977  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513
	I1221 19:46:27.288000  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513 (perms=drwx------)
	I1221 19:46:27.288011  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube/machines
	I1221 19:46:27.288021  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube/machines (perms=drwxr-xr-x)
	I1221 19:46:27.288031  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:27.288043  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429/.minikube (perms=drwxr-xr-x)
	I1221 19:46:27.288051  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22179-122429
	I1221 19:46:27.288059  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22179-122429 (perms=drwxrwxr-x)
	I1221 19:46:27.288071  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1221 19:46:27.288079  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1221 19:46:27.288091  127170 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1221 19:46:27.288098  127170 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1221 19:46:27.288108  127170 main.go:144] libmachine: checking permissions on dir: /home
	I1221 19:46:27.288115  127170 main.go:144] libmachine: skipping /home - not owner
	I1221 19:46:27.288122  127170 main.go:144] libmachine: defining domain...
	I1221 19:46:27.289527  127170 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-659513</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-659513'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1221 19:46:27.294901  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:ec:79:39 in network default
	I1221 19:46:27.295454  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:27.295475  127170 main.go:144] libmachine: starting domain...
	I1221 19:46:27.295480  127170 main.go:144] libmachine: ensuring networks are active...
	I1221 19:46:27.296173  127170 main.go:144] libmachine: Ensuring network default is active
	I1221 19:46:27.296561  127170 main.go:144] libmachine: Ensuring network mk-addons-659513 is active
	I1221 19:46:27.297119  127170 main.go:144] libmachine: getting domain XML...
	I1221 19:46:27.298188  127170 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-659513</name>
	  <uuid>536fbf62-98e1-4d4f-bd81-908693d32210</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/addons-659513.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:56:ba:4c'/>
	      <source network='mk-addons-659513'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ec:79:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1221 19:46:28.562848  127170 main.go:144] libmachine: waiting for domain to start...
	I1221 19:46:28.564397  127170 main.go:144] libmachine: domain is now running
	I1221 19:46:28.564420  127170 main.go:144] libmachine: waiting for IP...
	I1221 19:46:28.565233  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:28.566189  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:28.566211  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:28.566604  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:28.566665  127170 retry.go:84] will retry after 300ms: waiting for domain to come up
	I1221 19:46:28.825289  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:28.826170  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:28.826192  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:28.826572  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:29.132113  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:29.132909  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:29.132924  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:29.133254  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:29.440831  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:29.441664  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:29.441679  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:29.442013  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:30.049064  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:30.049962  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:30.049986  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:30.050303  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:30.771349  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:30.772040  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:30.772057  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:30.772346  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:31.380188  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:31.380858  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:31.380876  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:31.381158  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:32.569722  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:32.570419  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:32.570436  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:32.570784  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:33.752332  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:33.753007  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:33.753030  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:33.753325  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:35.396839  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:35.397531  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:35.397553  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:35.397906  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:37.263196  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:37.263996  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:37.264018  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:37.264386  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:39.808909  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:39.809768  127170 main.go:144] libmachine: no network interface addresses found for domain addons-659513 (source=lease)
	I1221 19:46:39.809793  127170 main.go:144] libmachine: trying to list again with source=arp
	I1221 19:46:39.810144  127170 main.go:144] libmachine: unable to find current IP address of domain addons-659513 in network mk-addons-659513 (interfaces detected: [])
	I1221 19:46:43.197153  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.197994  127170 main.go:144] libmachine: domain addons-659513 has current primary IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.198007  127170 main.go:144] libmachine: found domain IP: 192.168.39.164
	I1221 19:46:43.198015  127170 main.go:144] libmachine: reserving static IP address...
	I1221 19:46:43.198446  127170 main.go:144] libmachine: unable to find host DHCP lease matching {name: "addons-659513", mac: "52:54:00:56:ba:4c", ip: "192.168.39.164"} in network mk-addons-659513
	I1221 19:46:43.487871  127170 main.go:144] libmachine: reserved static IP address 192.168.39.164 for domain addons-659513
	I1221 19:46:43.487901  127170 main.go:144] libmachine: waiting for SSH...
	I1221 19:46:43.487926  127170 main.go:144] libmachine: Getting to WaitForSSH function...
	I1221 19:46:43.491308  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.491976  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.492014  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.492250  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:43.492525  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:43.492540  127170 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1221 19:46:43.600096  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 19:46:43.600448  127170 main.go:144] libmachine: domain creation complete
	I1221 19:46:43.601951  127170 machine.go:94] provisionDockerMachine start ...
	I1221 19:46:43.604465  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.604895  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.604918  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.605051  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:43.605252  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:43.605261  127170 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 19:46:43.713386  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1221 19:46:43.713416  127170 buildroot.go:166] provisioning hostname "addons-659513"
	I1221 19:46:43.716267  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.716719  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.716740  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.716904  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:43.717154  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:43.717167  127170 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-659513 && echo "addons-659513" | sudo tee /etc/hostname
	I1221 19:46:43.842661  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-659513
	
	I1221 19:46:43.845869  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.846339  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.846375  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.846591  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:43.846874  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:43.846901  127170 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-659513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-659513/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-659513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 19:46:43.967319  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 19:46:43.967373  127170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22179-122429/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-122429/.minikube}
	I1221 19:46:43.967392  127170 buildroot.go:174] setting up certificates
	I1221 19:46:43.967405  127170 provision.go:84] configureAuth start
	I1221 19:46:43.970186  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.970673  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.970700  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.973061  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.973397  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:43.973419  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:43.973567  127170 provision.go:143] copyHostCerts
	I1221 19:46:43.973655  127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem (1123 bytes)
	I1221 19:46:43.973767  127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem (1679 bytes)
	I1221 19:46:43.973880  127170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem (1082 bytes)
	I1221 19:46:43.973948  127170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem org=jenkins.addons-659513 san=[127.0.0.1 192.168.39.164 addons-659513 localhost minikube]
	I1221 19:46:44.165150  127170 provision.go:177] copyRemoteCerts
	I1221 19:46:44.165219  127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 19:46:44.167681  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.168037  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.168063  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.168177  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:46:44.253956  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1221 19:46:44.286179  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 19:46:44.315895  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 19:46:44.344867  127170 provision.go:87] duration metric: took 377.423662ms to configureAuth
	I1221 19:46:44.344905  127170 buildroot.go:189] setting minikube options for container-runtime
	I1221 19:46:44.345090  127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:46:44.348307  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.348787  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.348820  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.349069  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:44.349343  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:44.349364  127170 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 19:46:44.594585  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 19:46:44.594621  127170 machine.go:97] duration metric: took 992.652445ms to provisionDockerMachine
	I1221 19:46:44.594638  127170 client.go:176] duration metric: took 17.74943249s to LocalClient.Create
	I1221 19:46:44.594668  127170 start.go:167] duration metric: took 17.749500937s to libmachine.API.Create "addons-659513"
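	The provisioning step above writes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and restarts CRI-O so the --insecure-registry flag for the service CIDR takes effect. A minimal Go sketch of how that shell pipeline can be composed (illustrative only, not minikube's provisioner code; the CIDR value is taken from the log above):

	package main

	import "fmt"

	// buildCrioDropIn composes the same shell pipeline the log shows above:
	// write /etc/sysconfig/crio.minikube, then restart CRI-O so the
	// --insecure-registry flag for the service CIDR takes effect.
	func buildCrioDropIn(serviceCIDR string) string {
		opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
		return fmt.Sprintf(
			"sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
			opts)
	}

	func main() {
		// Print the command; actually running it would require root on the guest.
		fmt.Println(buildCrioDropIn("10.96.0.0/12"))
	}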
	I1221 19:46:44.594680  127170 start.go:293] postStartSetup for "addons-659513" (driver="kvm2")
	I1221 19:46:44.594694  127170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 19:46:44.594800  127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 19:46:44.597899  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.598469  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.598507  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.598679  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:46:44.683977  127170 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 19:46:44.689116  127170 info.go:137] Remote host: Buildroot 2025.02
	I1221 19:46:44.689145  127170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/addons for local assets ...
	I1221 19:46:44.689208  127170 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/files for local assets ...
	I1221 19:46:44.689231  127170 start.go:296] duration metric: took 94.544681ms for postStartSetup
	I1221 19:46:44.700171  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.701649  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.701693  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.702013  127170 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/config.json ...
	I1221 19:46:44.702241  127170 start.go:128] duration metric: took 17.858850337s to createHost
	I1221 19:46:44.704326  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.704699  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.704719  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.704853  127170 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:44.705034  127170 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1221 19:46:44.705043  127170 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1221 19:46:44.813375  127170 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766346404.776216256
	
	I1221 19:46:44.813407  127170 fix.go:216] guest clock: 1766346404.776216256
	I1221 19:46:44.813415  127170 fix.go:229] Guest: 2025-12-21 19:46:44.776216256 +0000 UTC Remote: 2025-12-21 19:46:44.702254752 +0000 UTC m=+17.956660930 (delta=73.961504ms)
	I1221 19:46:44.813433  127170 fix.go:200] guest clock delta is within tolerance: 73.961504ms
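	The fix.go lines above compare the guest clock to the host clock and accept the 73.961504ms delta as within tolerance. A small Go sketch of that kind of check (illustrative; the one-second tolerance is an assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute guest/host clock skew
	// stays under the allowed maximum, as the log above checks.
	func withinTolerance(guest, host time.Time, max time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= max
	}

	func main() {
		host := time.Now()
		guest := host.Add(73961504 * time.Nanosecond) // the delta reported in the log
		fmt.Println(withinTolerance(guest, host, time.Second))
	}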
	I1221 19:46:44.813438  127170 start.go:83] releasing machines lock for "addons-659513", held for 17.970133873s
	I1221 19:46:44.816517  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.816935  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.816962  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.817514  127170 ssh_runner.go:195] Run: cat /version.json
	I1221 19:46:44.817571  127170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 19:46:44.820682  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.820916  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.821134  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.821170  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.821356  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:44.821354  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:46:44.821387  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:44.821612  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:46:44.919283  127170 ssh_runner.go:195] Run: systemctl --version
	I1221 19:46:44.925829  127170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 19:46:45.352278  127170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 19:46:45.359651  127170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 19:46:45.359745  127170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 19:46:45.379127  127170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 19:46:45.379155  127170 start.go:496] detecting cgroup driver to use...
	I1221 19:46:45.379218  127170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 19:46:45.399107  127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 19:46:45.417418  127170 docker.go:218] disabling cri-docker service (if available) ...
	I1221 19:46:45.417583  127170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 19:46:45.435868  127170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 19:46:45.452880  127170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 19:46:45.616657  127170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 19:46:45.845431  127170 docker.go:234] disabling docker service ...
	I1221 19:46:45.845566  127170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 19:46:45.863463  127170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 19:46:45.882585  127170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 19:46:46.048711  127170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 19:46:46.191469  127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 19:46:46.208719  127170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 19:46:46.232278  127170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 19:46:46.232345  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.244533  127170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 19:46:46.244622  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.256733  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.268577  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.280322  127170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 19:46:46.294060  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.306840  127170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:46.327456  127170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
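	The sed commands above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. An equivalent pure-Go sketch of those two substitutions (illustrative only; minikube performs them with sed over SSH as shown):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies, in Go, the same edits as the sed commands above:
	// replace the pause_image line and force the cgroup_manager setting.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
	}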
	I1221 19:46:46.339741  127170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 19:46:46.350403  127170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1221 19:46:46.350512  127170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1221 19:46:46.370953  127170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 19:46:46.383590  127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:46.532727  127170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 19:46:46.738903  127170 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 19:46:46.739020  127170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 19:46:46.744514  127170 start.go:564] Will wait 60s for crictl version
	I1221 19:46:46.744594  127170 ssh_runner.go:195] Run: which crictl
	I1221 19:46:46.748790  127170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 19:46:46.785541  127170 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1221 19:46:46.785666  127170 ssh_runner.go:195] Run: crio --version
	I1221 19:46:46.820669  127170 ssh_runner.go:195] Run: crio --version
	I1221 19:46:46.906406  127170 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1221 19:46:46.915176  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:46.915594  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:46:46.915623  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:46:46.915833  127170 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1221 19:46:46.921213  127170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
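	The bash one-liner above rewrites /etc/hosts: any existing host.minikube.internal line is dropped and a fresh "IP<TAB>hostname" entry is appended. The same transformation, sketched in Go (illustrative; minikube does it with the shell pipeline shown):

	package main

	import (
		"fmt"
		"strings"
	)

	// addHostEntry drops any existing line ending in "\t<name>" and appends
	// a new "ip\tname" mapping, mirroring the grep -v / echo pipeline above.
	func addHostEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			out = append(out, line)
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		fmt.Print(addHostEntry("127.0.0.1\tlocalhost\n", "192.168.39.1", "host.minikube.internal"))
	}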
	I1221 19:46:46.937559  127170 kubeadm.go:884] updating cluster {Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 19:46:46.937710  127170 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:46.937777  127170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:46.975983  127170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1221 19:46:46.976068  127170 ssh_runner.go:195] Run: which lz4
	I1221 19:46:46.980804  127170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1221 19:46:46.985796  127170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1221 19:46:46.985844  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1221 19:46:48.203533  127170 crio.go:462] duration metric: took 1.222757681s to copy over tarball
	I1221 19:46:48.203618  127170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1221 19:46:49.669329  127170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.465669427s)
	I1221 19:46:49.669364  127170 crio.go:469] duration metric: took 1.46579883s to extract the tarball
	I1221 19:46:49.669375  127170 ssh_runner.go:146] rm: /preloaded.tar.lz4
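	The preload path above is: check for /preloaded.tar.lz4, scp the cached tarball over if it is missing, extract it into /var with lz4, then remove it. A sketch of how the extraction command can be assembled with os/exec (illustrative; it only builds the argument vector, since the tarball and root access exist only on the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreloadCmd builds the same tar invocation the log shows for
	// unpacking the preloaded container images into /var.
	func extractPreloadCmd(tarball, dest string) *exec.Cmd {
		return exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
	}

	func main() {
		cmd := extractPreloadCmd("/preloaded.tar.lz4", "/var")
		fmt.Println(cmd.Args) // printed rather than run: the file only exists on the guest
	}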
	I1221 19:46:49.706100  127170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:49.755759  127170 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 19:46:49.755785  127170 cache_images.go:86] Images are preloaded, skipping loading
	I1221 19:46:49.755795  127170 kubeadm.go:935] updating node { 192.168.39.164 8443 v1.34.3 crio true true} ...
	I1221 19:46:49.755938  127170 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-659513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
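	The kubelet unit override above is generated from the node's runtime, Kubernetes version, name and IP. A sketch rendering a unit of the same shape with text/template (the flag set is copied from the log, not from minikube source):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnitTmpl mirrors the shape of the systemd override shown above.
	const kubeletUnitTmpl = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.34.3",
			"NodeName":          "addons-659513",
			"NodeIP":            "192.168.39.164",
		})
	}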
	I1221 19:46:49.756025  127170 ssh_runner.go:195] Run: crio config
	I1221 19:46:49.800898  127170 cni.go:84] Creating CNI manager for ""
	I1221 19:46:49.800923  127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 19:46:49.800945  127170 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 19:46:49.800967  127170 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-659513 NodeName:addons-659513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 19:46:49.801085  127170 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-659513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 19:46:49.801147  127170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 19:46:49.813256  127170 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 19:46:49.813368  127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 19:46:49.825090  127170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1221 19:46:49.845153  127170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 19:46:49.864927  127170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 19:46:49.885107  127170 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1221 19:46:49.889281  127170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 19:46:49.903809  127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:50.042222  127170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:46:50.075783  127170 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513 for IP: 192.168.39.164
	I1221 19:46:50.075807  127170 certs.go:195] generating shared ca certs ...
	I1221 19:46:50.075823  127170 certs.go:227] acquiring lock for ca certs: {Name:mkda19a66cdf101dd9d66a3219f3492b9fb00ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.075965  127170 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key
	I1221 19:46:50.181556  127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt ...
	I1221 19:46:50.181591  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt: {Name:mk2b5cc8837700d02edda3aea25effa33f4607cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.181770  127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key ...
	I1221 19:46:50.181781  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key: {Name:mk4e031103f29442df42078ad479c1dddebebca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.181860  127170 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key
	I1221 19:46:50.217804  127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt ...
	I1221 19:46:50.217834  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt: {Name:mk45376a283e1faa28fc0c4e184c4fc9d95a74a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.218000  127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key ...
	I1221 19:46:50.218012  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key: {Name:mk47096f7d96737d2b148e108e99e4246fde4cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.218086  127170 certs.go:257] generating profile certs ...
	I1221 19:46:50.218143  127170 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key
	I1221 19:46:50.218154  127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt with IP's: []
	I1221 19:46:50.251348  127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt ...
	I1221 19:46:50.251376  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: {Name:mkb06e42755b88e2b2958dafd8bf92399d2404c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.251527  127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key ...
	I1221 19:46:50.251542  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.key: {Name:mka38165d0c3f24db3945be76ba9af293cd5085c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.251620  127170 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d
	I1221 19:46:50.251640  127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1221 19:46:50.350045  127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d ...
	I1221 19:46:50.350082  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d: {Name:mk4bcba66f72c95a0f4d5cbb28ab113907a605c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.350282  127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d ...
	I1221 19:46:50.350301  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d: {Name:mk41a18cda5bca449a80cf2a89fa2251133f71d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.350393  127170 certs.go:382] copying /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt.2a85b83d -> /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt
	I1221 19:46:50.350481  127170 certs.go:386] copying /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key.2a85b83d -> /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key
	I1221 19:46:50.350556  127170 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key
	I1221 19:46:50.350582  127170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt with IP's: []
	I1221 19:46:50.447003  127170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt ...
	I1221 19:46:50.447035  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt: {Name:mk24f8bec2a16680dca4d7845a13d5a21324eaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.447227  127170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key ...
	I1221 19:46:50.447253  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key: {Name:mk3427ee7161a4c5f2da22ea973d1cc86c00d395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:50.447511  127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 19:46:50.447560  127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem (1082 bytes)
	I1221 19:46:50.447601  127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem (1123 bytes)
	I1221 19:46:50.447633  127170 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem (1679 bytes)
	I1221 19:46:50.448280  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 19:46:50.479658  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1221 19:46:50.508097  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 19:46:50.536972  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 19:46:50.564984  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 19:46:50.593602  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 19:46:50.622142  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 19:46:50.650372  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 19:46:50.679003  127170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 19:46:50.707647  127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 19:46:50.728048  127170 ssh_runner.go:195] Run: openssl version
	I1221 19:46:50.734462  127170 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:50.748154  127170 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 19:46:50.760825  127170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:50.766526  127170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:50.766588  127170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:50.776068  127170 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 19:46:50.788150  127170 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
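	The two commands above compute the OpenSSL subject hash of minikubeCA.pem and create the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's certificate-directory lookup expects. A Go sketch of the same two steps (illustrative; it shells out to openssl for the hash and needs write access to the certs directory):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash asks openssl for the subject hash of the PEM file, then
	// creates the <hash>.0 symlink in the certs directory, like ln -fs above.
	func linkCertByHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("link failed (expected outside the guest):", err)
		}
	}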
	I1221 19:46:50.800669  127170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 19:46:50.806135  127170 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 19:46:50.806191  127170 kubeadm.go:401] StartCluster: {Name:addons-659513 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-659513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:46:50.806275  127170 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:46:50.806345  127170 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:46:50.840534  127170 cri.go:96] found id: ""
	I1221 19:46:50.840615  127170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 19:46:50.853027  127170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 19:46:50.864501  127170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 19:46:50.875558  127170 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 19:46:50.875575  127170 kubeadm.go:158] found existing configuration files:
	
	I1221 19:46:50.875615  127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 19:46:50.885556  127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 19:46:50.885621  127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 19:46:50.896335  127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 19:46:50.906565  127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 19:46:50.906631  127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 19:46:50.917944  127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 19:46:50.928089  127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 19:46:50.928158  127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 19:46:50.939811  127170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 19:46:50.950719  127170 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 19:46:50.950785  127170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 19:46:50.962221  127170 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1221 19:46:51.118864  127170 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 19:47:02.465041  127170 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 19:47:02.465130  127170 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 19:47:02.465234  127170 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 19:47:02.465350  127170 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 19:47:02.465447  127170 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 19:47:02.465516  127170 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 19:47:02.466977  127170 out.go:252]   - Generating certificates and keys ...
	I1221 19:47:02.467036  127170 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 19:47:02.467089  127170 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 19:47:02.467166  127170 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 19:47:02.467249  127170 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 19:47:02.467353  127170 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 19:47:02.467431  127170 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 19:47:02.467528  127170 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 19:47:02.467713  127170 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-659513 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1221 19:47:02.467794  127170 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 19:47:02.467979  127170 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-659513 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1221 19:47:02.468074  127170 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 19:47:02.468171  127170 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 19:47:02.468241  127170 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 19:47:02.468332  127170 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 19:47:02.468381  127170 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 19:47:02.468468  127170 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 19:47:02.468551  127170 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 19:47:02.468622  127170 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 19:47:02.468703  127170 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 19:47:02.468775  127170 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 19:47:02.468869  127170 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 19:47:02.470451  127170 out.go:252]   - Booting up control plane ...
	I1221 19:47:02.470571  127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 19:47:02.470675  127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 19:47:02.470807  127170 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 19:47:02.470999  127170 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 19:47:02.471157  127170 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 19:47:02.471301  127170 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 19:47:02.471377  127170 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 19:47:02.471410  127170 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 19:47:02.471532  127170 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 19:47:02.471652  127170 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 19:47:02.471741  127170 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001667437s
	I1221 19:47:02.471860  127170 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 19:47:02.471969  127170 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1221 19:47:02.472090  127170 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 19:47:02.472192  127170 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 19:47:02.472305  127170 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.375292802s
	I1221 19:47:02.472365  127170 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.782821207s
	I1221 19:47:02.472423  127170 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50240184s
	I1221 19:47:02.472536  127170 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 19:47:02.472683  127170 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 19:47:02.472759  127170 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 19:47:02.473010  127170 kubeadm.go:319] [mark-control-plane] Marking the node addons-659513 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 19:47:02.473087  127170 kubeadm.go:319] [bootstrap-token] Using token: opiai1.qnvll8epf3ex3bpn
	I1221 19:47:02.475272  127170 out.go:252]   - Configuring RBAC rules ...
	I1221 19:47:02.475361  127170 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 19:47:02.475441  127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 19:47:02.475600  127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 19:47:02.475727  127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 19:47:02.475824  127170 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 19:47:02.475892  127170 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 19:47:02.475982  127170 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 19:47:02.476047  127170 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 19:47:02.476126  127170 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 19:47:02.476135  127170 kubeadm.go:319] 
	I1221 19:47:02.476229  127170 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 19:47:02.476237  127170 kubeadm.go:319] 
	I1221 19:47:02.476342  127170 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 19:47:02.476355  127170 kubeadm.go:319] 
	I1221 19:47:02.476391  127170 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 19:47:02.476478  127170 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 19:47:02.476562  127170 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 19:47:02.476578  127170 kubeadm.go:319] 
	I1221 19:47:02.476622  127170 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 19:47:02.476628  127170 kubeadm.go:319] 
	I1221 19:47:02.476677  127170 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 19:47:02.476686  127170 kubeadm.go:319] 
	I1221 19:47:02.476757  127170 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 19:47:02.476866  127170 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 19:47:02.476961  127170 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 19:47:02.476971  127170 kubeadm.go:319] 
	I1221 19:47:02.477076  127170 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 19:47:02.477180  127170 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 19:47:02.477188  127170 kubeadm.go:319] 
	I1221 19:47:02.477301  127170 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token opiai1.qnvll8epf3ex3bpn \
	I1221 19:47:02.477433  127170 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35461b95b227e9d1829c929bb399222e80c78f00e691e8dfd0f482c558d3d6 \
	I1221 19:47:02.477462  127170 kubeadm.go:319] 	--control-plane 
	I1221 19:47:02.477465  127170 kubeadm.go:319] 
	I1221 19:47:02.477563  127170 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 19:47:02.477570  127170 kubeadm.go:319] 
	I1221 19:47:02.477660  127170 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token opiai1.qnvll8epf3ex3bpn \
	I1221 19:47:02.477802  127170 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4f35461b95b227e9d1829c929bb399222e80c78f00e691e8dfd0f482c558d3d6 
	I1221 19:47:02.477816  127170 cni.go:84] Creating CNI manager for ""
	I1221 19:47:02.477825  127170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 19:47:02.479208  127170 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1221 19:47:02.480417  127170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1221 19:47:02.496005  127170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
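	The step above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's exact contents are not shown in the log. For orientation, a generic bridge-plus-portmap conflist of the same general shape, generated in Go (field values such as the name, bridge device and pod CIDR are assumptions, with the CIDR taken from the kubeadm config above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// bridgeConflist builds a generic bridge CNI config; values are
	// illustrative, not the exact contents minikube writes.
	func bridgeConflist(podCIDR string) ([]byte, error) {
		conf := map[string]any{
			"cniVersion": "0.4.0",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": podCIDR,
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		return json.MarshalIndent(conf, "", "  ")
	}

	func main() {
		b, _ := bridgeConflist("10.244.0.0/16")
		fmt.Println(string(b))
	}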
	I1221 19:47:02.525861  127170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 19:47:02.525975  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:02.525987  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-659513 minikube.k8s.io/updated_at=2025_12_21T19_47_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-659513 minikube.k8s.io/primary=true
	I1221 19:47:02.557089  127170 ops.go:34] apiserver oom_adj: -16
	I1221 19:47:02.676740  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:03.176831  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:03.676850  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.177421  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.677008  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.177269  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.677211  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.177219  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.677812  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.177342  127170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.287157  127170 kubeadm.go:1114] duration metric: took 4.76123301s to wait for elevateKubeSystemPrivileges
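	The repeated "kubectl get sa default" runs above are a poll loop: kubeadm has finished, and minikube waits for the default service account to exist before granting kube-system elevated RBAC. A sketch of such a wait loop (illustrative; it calls plain kubectl rather than the versioned binary under /var/lib/minikube/binaries):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries the same command the log repeats roughly every
	// 500ms until the "default" service account exists or the deadline passes.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			err := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig).Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not ready: %w", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}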
	I1221 19:47:07.287211  127170 kubeadm.go:403] duration metric: took 16.481024379s to StartCluster
	I1221 19:47:07.287247  127170 settings.go:142] acquiring lock: {Name:mk8bc901164ee13eb5278832ae429ca9408ea551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:07.287390  127170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:47:07.287772  127170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/kubeconfig: {Name:mke0d928f8059efde48d6d18bc9cf0e4672401c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:07.287989  127170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 19:47:07.288010  127170 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:47:07.288075  127170 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1221 19:47:07.288188  127170 addons.go:70] Setting default-storageclass=true in profile "addons-659513"
	I1221 19:47:07.288208  127170 addons.go:70] Setting yakd=true in profile "addons-659513"
	I1221 19:47:07.288225  127170 addons.go:239] Setting addon yakd=true in "addons-659513"
	I1221 19:47:07.288223  127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:07.288233  127170 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-659513"
	I1221 19:47:07.288212  127170 addons.go:70] Setting cloud-spanner=true in profile "addons-659513"
	I1221 19:47:07.288256  127170 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-659513"
	I1221 19:47:07.288270  127170 addons.go:70] Setting registry=true in profile "addons-659513"
	I1221 19:47:07.288224  127170 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-659513"
	I1221 19:47:07.288218  127170 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-659513"
	I1221 19:47:07.288389  127170 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-659513"
	I1221 19:47:07.288257  127170 addons.go:239] Setting addon cloud-spanner=true in "addons-659513"
	I1221 19:47:07.288516  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288552  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288265  127170 addons.go:70] Setting storage-provisioner=true in profile "addons-659513"
	I1221 19:47:07.288646  127170 addons.go:239] Setting addon storage-provisioner=true in "addons-659513"
	I1221 19:47:07.288691  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288275  127170 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-659513"
	I1221 19:47:07.288964  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288278  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288281  127170 addons.go:239] Setting addon registry=true in "addons-659513"
	I1221 19:47:07.289470  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288281  127170 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-659513"
	I1221 19:47:07.289546  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288282  127170 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-659513"
	I1221 19:47:07.289970  127170 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-659513"
	I1221 19:47:07.288282  127170 addons.go:70] Setting ingress-dns=true in profile "addons-659513"
	I1221 19:47:07.290284  127170 addons.go:239] Setting addon ingress-dns=true in "addons-659513"
	I1221 19:47:07.290321  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288286  127170 addons.go:70] Setting inspektor-gadget=true in profile "addons-659513"
	I1221 19:47:07.290517  127170 addons.go:239] Setting addon inspektor-gadget=true in "addons-659513"
	I1221 19:47:07.290556  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288287  127170 addons.go:70] Setting volcano=true in profile "addons-659513"
	I1221 19:47:07.290743  127170 addons.go:239] Setting addon volcano=true in "addons-659513"
	I1221 19:47:07.290779  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288285  127170 addons.go:70] Setting registry-creds=true in profile "addons-659513"
	I1221 19:47:07.291387  127170 addons.go:239] Setting addon registry-creds=true in "addons-659513"
	I1221 19:47:07.291424  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288287  127170 addons.go:70] Setting gcp-auth=true in profile "addons-659513"
	I1221 19:47:07.291660  127170 mustload.go:66] Loading cluster: addons-659513
	I1221 19:47:07.291869  127170 config.go:182] Loaded profile config "addons-659513": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:07.288290  127170 addons.go:70] Setting metrics-server=true in profile "addons-659513"
	I1221 19:47:07.291913  127170 addons.go:239] Setting addon metrics-server=true in "addons-659513"
	I1221 19:47:07.291952  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288291  127170 addons.go:70] Setting volumesnapshots=true in profile "addons-659513"
	I1221 19:47:07.292277  127170 addons.go:239] Setting addon volumesnapshots=true in "addons-659513"
	I1221 19:47:07.292307  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.288293  127170 addons.go:70] Setting ingress=true in profile "addons-659513"
	I1221 19:47:07.292544  127170 addons.go:239] Setting addon ingress=true in "addons-659513"
	I1221 19:47:07.292582  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.292976  127170 out.go:179] * Verifying Kubernetes components...
	I1221 19:47:07.296731  127170 addons.go:239] Setting addon default-storageclass=true in "addons-659513"
	I1221 19:47:07.296768  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.296985  127170 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1221 19:47:07.296992  127170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:47:07.297059  127170 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1221 19:47:07.297072  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1221 19:47:07.297060  127170 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 19:47:07.298310  127170 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1221 19:47:07.298367  127170 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:07.298713  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1221 19:47:07.298409  127170 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:07.298945  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1221 19:47:07.299060  127170 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1221 19:47:07.299093  127170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:07.299409  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 19:47:07.299126  127170 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-659513"
	I1221 19:47:07.299166  127170 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	W1221 19:47:07.299296  127170 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1221 19:47:07.299511  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.299723  127170 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1221 19:47:07.299756  127170 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1221 19:47:07.299777  127170 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1221 19:47:07.300167  127170 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1221 19:47:07.299843  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:07.300542  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1221 19:47:07.300579  127170 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:07.300976  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1221 19:47:07.301420  127170 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:07.301438  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1221 19:47:07.302341  127170 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1221 19:47:07.302392  127170 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1221 19:47:07.302413  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1221 19:47:07.302385  127170 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:07.302452  127170 out.go:179]   - Using image docker.io/registry:3.0.0
	I1221 19:47:07.302548  127170 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:07.303522  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1221 19:47:07.302847  127170 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:07.303590  127170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 19:47:07.303631  127170 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1221 19:47:07.303674  127170 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:07.303690  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1221 19:47:07.303675  127170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1221 19:47:07.304338  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1221 19:47:07.304358  127170 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1221 19:47:07.304369  127170 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1221 19:47:07.304382  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1221 19:47:07.304455  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1221 19:47:07.305402  127170 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1221 19:47:07.305443  127170 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1221 19:47:07.307133  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1221 19:47:07.308280  127170 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:07.308293  127170 out.go:179]   - Using image docker.io/busybox:stable
	I1221 19:47:07.309140  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.309459  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1221 19:47:07.309608  127170 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:07.309629  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1221 19:47:07.309617  127170 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:07.309679  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1221 19:47:07.310667  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.311178  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.311214  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.311269  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.311654  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1221 19:47:07.312272  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.312311  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.312259  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.312621  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.312641  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.312704  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.313133  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.313247  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.313289  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.314070  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.314188  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.314265  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.314291  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.314385  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.314399  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.314436  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.314597  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.314603  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1221 19:47:07.314882  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.315008  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.315095  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.315106  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.316005  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.316043  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.316194  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.316245  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.316353  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.316408  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.316696  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.316902  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.317366  127170 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1221 19:47:07.317366  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.317424  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.317436  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.317455  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.317557  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.317596  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.317697  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.318015  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.318334  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.318376  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.318532  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1221 19:47:07.318547  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1221 19:47:07.318595  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.318631  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.319032  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.319044  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.319063  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.319443  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.319705  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.319741  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.319908  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.320097  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.320131  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.320321  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:07.322034  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.322407  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:07.322442  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:07.322632  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	W1221 19:47:07.451672  127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54306->192.168.39.164:22: read: connection reset by peer
	I1221 19:47:07.451715  127170 retry.go:84] will retry after 200ms: ssh: handshake failed: read tcp 192.168.39.1:54306->192.168.39.164:22: read: connection reset by peer
	W1221 19:47:07.461092  127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54314->192.168.39.164:22: read: connection reset by peer
	W1221 19:47:07.748560  127170 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54344->192.168.39.164:22: read: connection reset by peer
	I1221 19:47:08.029768  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:08.215629  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1221 19:47:08.215670  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1221 19:47:08.247352  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:08.304317  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:08.362560  127170 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1221 19:47:08.362600  127170 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1221 19:47:08.400131  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:08.411258  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:08.416375  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:08.422943  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:08.433592  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:08.445886  127170 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1221 19:47:08.445930  127170 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1221 19:47:08.458678  127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1221 19:47:08.458698  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1221 19:47:08.466761  127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1221 19:47:08.466785  127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1221 19:47:08.552736  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:08.778028  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1221 19:47:08.778065  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1221 19:47:08.842461  127170 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.554435042s)
	I1221 19:47:08.842557  127170 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.545534321s)
	I1221 19:47:08.842637  127170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:47:08.842712  127170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 19:47:09.067180  127170 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.067219  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1221 19:47:09.100900  127170 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1221 19:47:09.100946  127170 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1221 19:47:09.102927  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:09.115095  127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1221 19:47:09.115133  127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1221 19:47:09.212190  127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1221 19:47:09.212226  127170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1221 19:47:09.358832  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1221 19:47:09.358871  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1221 19:47:09.501752  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.519361  127170 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1221 19:47:09.519400  127170 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1221 19:47:09.525481  127170 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1221 19:47:09.525515  127170 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1221 19:47:09.576025  127170 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.576105  127170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1221 19:47:09.688923  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1221 19:47:09.688957  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1221 19:47:09.802085  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1221 19:47:09.802121  127170 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1221 19:47:09.911163  127170 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:09.911189  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1221 19:47:09.956364  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.980942  127170 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1221 19:47:09.980975  127170 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1221 19:47:10.160269  127170 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:10.160299  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1221 19:47:10.254546  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:10.321600  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1221 19:47:10.321632  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1221 19:47:10.482774  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:10.618173  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1221 19:47:10.618212  127170 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1221 19:47:10.948074  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1221 19:47:10.948104  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1221 19:47:11.445251  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1221 19:47:11.445280  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1221 19:47:11.779673  127170 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:11.779708  127170 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1221 19:47:11.965025  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:14.734067  127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1221 19:47:14.737199  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:14.737695  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:14.737724  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:14.737902  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:15.176346  127170 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1221 19:47:15.398929  127170 addons.go:239] Setting addon gcp-auth=true in "addons-659513"
	I1221 19:47:15.399017  127170 host.go:66] Checking if "addons-659513" exists ...
	I1221 19:47:15.401135  127170 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1221 19:47:15.403726  127170 main.go:144] libmachine: domain addons-659513 has defined MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:15.404170  127170 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ba:4c", ip: ""} in network mk-addons-659513: {Iface:virbr1 ExpiryTime:2025-12-21 20:46:41 +0000 UTC Type:0 Mac:52:54:00:56:ba:4c Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-659513 Clientid:01:52:54:00:56:ba:4c}
	I1221 19:47:15.404208  127170 main.go:144] libmachine: domain addons-659513 has defined IP address 192.168.39.164 and MAC address 52:54:00:56:ba:4c in network mk-addons-659513
	I1221 19:47:15.404432  127170 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/addons-659513/id_rsa Username:docker}
	I1221 19:47:15.677540  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.430146382s)
	I1221 19:47:15.677682  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.373319843s)
	I1221 19:47:15.677731  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.647924055s)
	I1221 19:47:15.677748  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.277585166s)
	I1221 19:47:15.677763  127170 addons.go:495] Verifying addon ingress=true in "addons-659513"
	I1221 19:47:15.677875  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.26147828s)
	I1221 19:47:15.677842  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.266542427s)
	I1221 19:47:15.677947  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.254981457s)
	I1221 19:47:15.677983  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.244374424s)
	I1221 19:47:15.678035  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.125273778s)
	I1221 19:47:15.678074  127170 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.835410534s)
	I1221 19:47:15.678095  127170 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.83536152s)
	I1221 19:47:15.678118  127170 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1221 19:47:15.678254  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.575295358s)
	I1221 19:47:15.678328  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.176537181s)
	I1221 19:47:15.678358  127170 addons.go:495] Verifying addon registry=true in "addons-659513"
	I1221 19:47:15.678887  127170 node_ready.go:35] waiting up to 6m0s for node "addons-659513" to be "Ready" ...
	I1221 19:47:15.678449  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.72204242s)
	I1221 19:47:15.678957  127170 addons.go:495] Verifying addon metrics-server=true in "addons-659513"
	I1221 19:47:15.678527  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.423910096s)
	I1221 19:47:15.679768  127170 out.go:179] * Verifying ingress addon...
	I1221 19:47:15.680690  127170 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-659513 service yakd-dashboard -n yakd-dashboard
	
	I1221 19:47:15.680690  127170 out.go:179] * Verifying registry addon...
	I1221 19:47:15.682005  127170 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1221 19:47:15.683208  127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1221 19:47:15.686907  127170 node_ready.go:49] node "addons-659513" is "Ready"
	I1221 19:47:15.686933  127170 node_ready.go:38] duration metric: took 8.002783ms for node "addons-659513" to be "Ready" ...
	I1221 19:47:15.686949  127170 api_server.go:52] waiting for apiserver process to appear ...
	I1221 19:47:15.686988  127170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 19:47:15.716365  127170 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1221 19:47:15.716391  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:15.724344  127170 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 19:47:15.724366  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1221 19:47:15.762064  127170 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1221 19:47:16.195857  127170 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-659513" context rescaled to 1 replicas
	I1221 19:47:16.265102  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.267651  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.718582  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.718655  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.719351  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.236526092s)
	W1221 19:47:16.719403  127170 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 19:47:16.719442  127170 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 19:47:17.064107  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:17.192991  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.193168  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.703094  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.703449  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.761575  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.796483925s)
	I1221 19:47:17.761634  127170 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-659513"
	I1221 19:47:17.761643  127170 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.360472517s)
	I1221 19:47:17.761710  127170 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.074701489s)
	I1221 19:47:17.761740  127170 api_server.go:72] duration metric: took 10.473697059s to wait for apiserver process to appear ...
	I1221 19:47:17.761796  127170 api_server.go:88] waiting for apiserver healthz status ...
	I1221 19:47:17.761824  127170 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1221 19:47:17.763165  127170 out.go:179] * Verifying csi-hostpath-driver addon...
	I1221 19:47:17.763172  127170 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:17.764756  127170 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1221 19:47:17.765531  127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1221 19:47:17.766343  127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1221 19:47:17.766364  127170 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1221 19:47:17.770759  127170 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1221 19:47:17.771821  127170 api_server.go:141] control plane version: v1.34.3
	I1221 19:47:17.771843  127170 api_server.go:131] duration metric: took 10.040248ms to wait for apiserver health ...
	I1221 19:47:17.771853  127170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 19:47:17.783077  127170 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 19:47:17.783110  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:17.803103  127170 system_pods.go:59] 20 kube-system pods found
	I1221 19:47:17.803153  127170 system_pods.go:61] "amd-gpu-device-plugin-96g9f" [ae1a4e49-3725-4452-ade4-01b3af2dfe3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:17.803167  127170 system_pods.go:61] "coredns-66bc5c9577-26xrr" [dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:17.803178  127170 system_pods.go:61] "coredns-66bc5c9577-wmlm4" [8d0b39bf-67af-49f4-bad3-27f7b7667bfd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:17.803187  127170 system_pods.go:61] "csi-hostpath-attacher-0" [bd327965-2ca8-4ea6-a549-0280d8857276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:17.803199  127170 system_pods.go:61] "csi-hostpath-resizer-0" [9e4fa2a5-9ead-47f3-976f-8a05bf1aefe8] Pending
	I1221 19:47:17.803207  127170 system_pods.go:61] "csi-hostpathplugin-8pbdl" [9db03d51-8cde-4534-a9b4-d5e1468a87b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:17.803214  127170 system_pods.go:61] "etcd-addons-659513" [d6e79a60-b93a-4d72-9a6a-27a83696ac1f] Running
	I1221 19:47:17.803224  127170 system_pods.go:61] "kube-apiserver-addons-659513" [2f7bb8d7-56ea-4e2d-be31-3abb043240f9] Running
	I1221 19:47:17.803230  127170 system_pods.go:61] "kube-controller-manager-addons-659513" [f8a6a122-5dd0-433a-852b-1265788f9d30] Running
	I1221 19:47:17.803238  127170 system_pods.go:61] "kube-ingress-dns-minikube" [4c506cde-8495-4847-95bb-99f92a15aeb1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:17.803244  127170 system_pods.go:61] "kube-proxy-fbvb9" [f81d5845-1ca3-4d59-b971-848c73663c2d] Running
	I1221 19:47:17.803250  127170 system_pods.go:61] "kube-scheduler-addons-659513" [230ee0ee-e72a-4131-a7ff-5774926289ad] Running
	I1221 19:47:17.803259  127170 system_pods.go:61] "metrics-server-85b7d694d7-v72tn" [68904163-d7f9-411e-9a48-c014af0cef06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:17.803267  127170 system_pods.go:61] "nvidia-device-plugin-daemonset-ql2hl" [76700fd6-090f-485b-97c5-07cea983a62e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:17.803275  127170 system_pods.go:61] "registry-6b586f9694-dvnl4" [56216ff6-db76-45d5-945d-2bf21a023ebf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:17.803283  127170 system_pods.go:61] "registry-creds-764b6fb674-xk9c7" [d8a47e94-fba3-4da4-9a39-6f7db289cd2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:17.803304  127170 system_pods.go:61] "registry-proxy-kntxd" [1893f6cf-53cb-4c2d-acea-6739ff305373] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:17.803312  127170 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k6z6g" [661ae2cd-24eb-42e0-bcb7-8eb9cda59e83] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:17.803320  127170 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rr67d" [0fe3f2c2-9357-4203-83e5-791658b87779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:17.803328  127170 system_pods.go:61] "storage-provisioner" [97ccdeb0-0aa9-4509-9ca8-0d067721e67a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:17.803337  127170 system_pods.go:74] duration metric: took 31.476578ms to wait for pod list to return data ...
	I1221 19:47:17.803348  127170 default_sa.go:34] waiting for default service account to be created ...
	I1221 19:47:17.809983  127170 default_sa.go:45] found service account: "default"
	I1221 19:47:17.810010  127170 default_sa.go:55] duration metric: took 6.654975ms for default service account to be created ...
	I1221 19:47:17.810020  127170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 19:47:17.845997  127170 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:17.846035  127170 system_pods.go:89] "amd-gpu-device-plugin-96g9f" [ae1a4e49-3725-4452-ade4-01b3af2dfe3f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:17.846044  127170 system_pods.go:89] "coredns-66bc5c9577-26xrr" [dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:17.846052  127170 system_pods.go:89] "coredns-66bc5c9577-wmlm4" [8d0b39bf-67af-49f4-bad3-27f7b7667bfd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:17.846057  127170 system_pods.go:89] "csi-hostpath-attacher-0" [bd327965-2ca8-4ea6-a549-0280d8857276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:17.846062  127170 system_pods.go:89] "csi-hostpath-resizer-0" [9e4fa2a5-9ead-47f3-976f-8a05bf1aefe8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:17.846067  127170 system_pods.go:89] "csi-hostpathplugin-8pbdl" [9db03d51-8cde-4534-a9b4-d5e1468a87b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:17.846075  127170 system_pods.go:89] "etcd-addons-659513" [d6e79a60-b93a-4d72-9a6a-27a83696ac1f] Running
	I1221 19:47:17.846079  127170 system_pods.go:89] "kube-apiserver-addons-659513" [2f7bb8d7-56ea-4e2d-be31-3abb043240f9] Running
	I1221 19:47:17.846083  127170 system_pods.go:89] "kube-controller-manager-addons-659513" [f8a6a122-5dd0-433a-852b-1265788f9d30] Running
	I1221 19:47:17.846091  127170 system_pods.go:89] "kube-ingress-dns-minikube" [4c506cde-8495-4847-95bb-99f92a15aeb1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:17.846095  127170 system_pods.go:89] "kube-proxy-fbvb9" [f81d5845-1ca3-4d59-b971-848c73663c2d] Running
	I1221 19:47:17.846101  127170 system_pods.go:89] "kube-scheduler-addons-659513" [230ee0ee-e72a-4131-a7ff-5774926289ad] Running
	I1221 19:47:17.846108  127170 system_pods.go:89] "metrics-server-85b7d694d7-v72tn" [68904163-d7f9-411e-9a48-c014af0cef06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:17.846117  127170 system_pods.go:89] "nvidia-device-plugin-daemonset-ql2hl" [76700fd6-090f-485b-97c5-07cea983a62e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:17.846126  127170 system_pods.go:89] "registry-6b586f9694-dvnl4" [56216ff6-db76-45d5-945d-2bf21a023ebf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:17.846133  127170 system_pods.go:89] "registry-creds-764b6fb674-xk9c7" [d8a47e94-fba3-4da4-9a39-6f7db289cd2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:17.846138  127170 system_pods.go:89] "registry-proxy-kntxd" [1893f6cf-53cb-4c2d-acea-6739ff305373] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:17.846144  127170 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k6z6g" [661ae2cd-24eb-42e0-bcb7-8eb9cda59e83] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:17.846151  127170 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rr67d" [0fe3f2c2-9357-4203-83e5-791658b87779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:17.846155  127170 system_pods.go:89] "storage-provisioner" [97ccdeb0-0aa9-4509-9ca8-0d067721e67a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:17.846163  127170 system_pods.go:126] duration metric: took 36.137486ms to wait for k8s-apps to be running ...
	I1221 19:47:17.846172  127170 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 19:47:17.846226  127170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 19:47:17.938224  127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1221 19:47:17.938269  127170 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1221 19:47:18.021696  127170 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:18.021728  127170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1221 19:47:18.095285  127170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:18.192036  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.192631  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.271019  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:18.688359  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.691190  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.774465  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.095205  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.031038885s)
	I1221 19:47:19.095315  127170 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.249059717s)
	I1221 19:47:19.095347  127170 system_svc.go:56] duration metric: took 1.249171811s WaitForService to wait for kubelet
	I1221 19:47:19.095357  127170 kubeadm.go:587] duration metric: took 11.807314269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:47:19.095376  127170 node_conditions.go:102] verifying NodePressure condition ...
	I1221 19:47:19.107387  127170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1221 19:47:19.107419  127170 node_conditions.go:123] node cpu capacity is 2
	I1221 19:47:19.107434  127170 node_conditions.go:105] duration metric: took 12.052562ms to run NodePressure ...
	I1221 19:47:19.107446  127170 start.go:242] waiting for startup goroutines ...
	I1221 19:47:19.192610  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.206910  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.292979  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.325247  127170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.229921101s)
	I1221 19:47:19.326273  127170 addons.go:495] Verifying addon gcp-auth=true in "addons-659513"
	I1221 19:47:19.328264  127170 out.go:179] * Verifying gcp-auth addon...
	I1221 19:47:19.329891  127170 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1221 19:47:19.338528  127170 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1221 19:47:19.338546  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:19.688356  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.690019  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.772735  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.837383  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:20.190283  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.192016  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.272580  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.334811  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:20.686245  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.687961  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.773386  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.837847  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.198331  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.198934  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.273259  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.334079  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.688724  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.689418  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.775431  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.836591  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.186958  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.189471  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.269921  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.334798  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.688224  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.689079  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.770684  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.834632  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.193585  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.197271  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.273043  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.336042  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.686581  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.687581  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.770722  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.836325  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.189409  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.190627  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.290113  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.334472  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.687015  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.687087  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.769647  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.834122  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.192066  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.192207  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.295883  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.334330  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.686890  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.688989  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.770217  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.836003  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.190005  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.190157  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.291476  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.335912  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.686546  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.687207  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.770314  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.834969  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.188542  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.189269  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.271544  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.335133  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.686546  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.687516  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.772192  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.835907  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.374683  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.377142  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.377322  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.378399  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.688909  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.689832  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.769656  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.842153  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.189785  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.189803  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.271235  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.337888  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.685411  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.687830  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.769726  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.837390  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.258158  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.258386  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.358780  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.359915  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.686889  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.687030  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.769793  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.834254  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.190444  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:31.191408  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.269718  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.333811  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.686150  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.688479  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:31.772084  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.835402  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:32.243342  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.244721  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.272259  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.342405  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:32.690645  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.690684  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.770264  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.835389  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.192566  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.193048  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.271650  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.333391  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.688268  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.690248  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.770081  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.834932  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.187525  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.189108  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.270692  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.334913  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.688479  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.689216  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.770803  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.835369  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.189720  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.190559  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.274392  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.333328  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.686637  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.689754  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.769554  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.834269  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.186710  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.189388  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.412150  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.412734  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.689675  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.689931  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.771526  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.833911  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.189133  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.191957  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.270778  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.334850  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.686135  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.688330  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.771641  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.834288  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:38.187439  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:38.190974  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:38.272442  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:38.336330  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.000910  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.001368  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.001927  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.002535  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.187712  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.195436  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.289754  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.333514  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.686676  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.689523  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.770306  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.839372  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.187464  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.187477  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.270514  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.333253  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.690067  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.691052  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.771584  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.837943  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.190918  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.191156  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.270970  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.337019  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.685885  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.687630  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.770753  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.835313  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.188330  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.191033  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.273002  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.333854  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.688567  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.688761  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.788822  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.833917  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.189117  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.189624  127170 kapi.go:107] duration metric: took 27.506411084s to wait for kubernetes.io/minikube-addons=registry ...
	I1221 19:47:43.290180  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.334182  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.686078  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.772026  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.835472  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.187252  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.269731  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.337914  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.686588  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.773877  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.833919  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.187140  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.288529  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.333392  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.690197  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.770774  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.838042  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.190113  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.272773  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.336573  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.689532  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.774677  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.835292  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.190705  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.273313  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.347816  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.689074  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.770870  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.836939  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.188168  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.384536  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.385798  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.691424  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.790134  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.833125  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.190262  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.278840  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.334843  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.686039  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.770075  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.834636  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.188987  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.270968  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.336650  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.958722  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.959184  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.960154  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:51.192612  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:51.270316  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.335687  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:51.686708  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:51.771074  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.838495  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:52.186632  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:52.270778  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:52.335533  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:52.688109  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.004230  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:53.006171  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:53.188555  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.289098  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:53.389578  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:53.692124  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.775630  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:53.837274  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:54.187691  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.269996  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:54.334628  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:54.694480  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.794383  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:54.837652  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:55.189248  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.274658  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:55.333942  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:55.724077  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.769877  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:55.841420  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:56.186283  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.272664  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:56.336112  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:56.686659  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.775727  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:56.873457  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:57.188555  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.271080  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:57.333854  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:57.686305  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.771338  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:57.834242  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:58.186119  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.271386  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:58.336441  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:58.689280  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.772570  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:58.840136  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:59.189050  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:59.290081  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:59.390912  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:59.687580  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:59.770539  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:59.839055  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:00.186543  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:00.273104  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:00.337400  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:00.691097  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:00.773335  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:00.836259  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:01.188795  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:01.288116  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:01.333123  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:01.691097  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:01.769875  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:01.872453  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:02.186691  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:02.272810  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:02.333817  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:02.686681  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:02.770062  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:02.832945  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:03.189798  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:03.290544  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:03.395959  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:03.688057  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:03.770571  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:03.834552  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:04.187411  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:04.274604  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:04.335689  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:04.687977  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:04.789367  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:04.832672  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:05.186591  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:05.270442  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:05.334678  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:05.686771  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:05.770298  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:05.838161  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:06.186108  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:06.271662  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:06.340546  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:06.688024  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:06.769472  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:06.833768  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:07.187208  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:07.288222  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:07.388431  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:07.688042  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:07.770791  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:07.835270  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:08.187695  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:08.270521  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:08.334572  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:08.687147  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:08.769775  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:08.834042  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:09.187275  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:09.269777  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:09.337691  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:09.686807  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:09.772431  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:09.835675  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:10.187163  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:10.269578  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:10.334787  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:10.686326  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:10.769518  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:48:10.833274  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:11.185764  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:11.269430  127170 kapi.go:107] duration metric: took 53.503902518s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1221 19:48:11.333395  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:11.685773  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:11.833941  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:12.185645  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:12.333626  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:12.686857  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:12.834531  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:13.186643  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:13.336080  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:13.686088  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:13.834038  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:14.185170  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:14.333908  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:14.686280  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:14.833136  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:15.186266  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:15.333992  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:15.686562  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:15.836624  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:16.186587  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:16.334050  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:16.685982  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:16.833869  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:17.186444  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:17.333886  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:17.686200  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:17.835068  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:18.185891  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:18.333826  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:18.686721  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:18.833803  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:19.185530  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:19.333799  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:19.686365  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:19.837464  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:20.186201  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:20.334673  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:20.687058  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:20.833782  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:21.187740  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:21.337252  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:21.688698  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:21.835436  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:22.187689  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:22.338852  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:22.690141  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:22.839285  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:23.187072  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:23.334304  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:23.687007  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:23.838261  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:24.188104  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:24.334403  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:24.689315  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:24.844638  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:25.186156  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:25.333504  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:25.688574  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:25.836883  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:26.187057  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:26.333888  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:26.686641  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:26.834347  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:27.337693  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:27.338239  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:27.688219  127170 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:48:27.838590  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:28.186425  127170 kapi.go:107] duration metric: took 1m12.504424762s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1221 19:48:28.333143  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:28.838595  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:29.336226  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:29.836393  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:30.333900  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:30.834884  127170 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:48:31.334121  127170 kapi.go:107] duration metric: took 1m12.004226789s to wait for kubernetes.io/minikube-addons=gcp-auth ...
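	[editorial sketch] The kapi.go lines above show the pattern minikube uses here: poll roughly twice a second for pods matching a label selector, then report the total wait as a "duration metric". The snippet below is NOT minikube's actual kapi.go code; it is a minimal client-go sketch of that polling behaviour under stated assumptions (namespace "gcp-auth", the selector seen in the log, and a 6-minute timeout are illustrative choices).

	// Illustrative only: a minimal client-go equivalent of the "poll until pods
	// matching a label selector are Ready" behaviour visible in the log above.
	// Namespace, selector and timeout are assumptions for the example.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsReady polls every 500ms until at least one pod matching
	// selector in namespace reports the Ready condition, or timeout expires.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				log.Printf("waiting for pod %q, current state: Pending", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			log.Printf("waiting for pod %q, current state: %s", selector, pods.Items[0].Status.Phase)
			return false, nil
		})
		if err != nil {
			return err
		}
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
		return nil
	}

	func main() {
		// Assumes a reachable kubeconfig at the default path; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitForPodsReady(context.Background(), cs, "gcp-auth",
			"kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
	}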
	I1221 19:48:31.336072  127170 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-659513 cluster.
	I1221 19:48:31.337432  127170 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1221 19:48:31.338687  127170 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1221 19:48:31.339974  127170 out.go:179] * Enabled addons: storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1221 19:48:31.341084  127170 addons.go:530] duration metric: took 1m24.053006477s for enable addons: enabled=[storage-provisioner inspektor-gadget cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1221 19:48:31.341129  127170 start.go:247] waiting for cluster config update ...
	I1221 19:48:31.341159  127170 start.go:256] writing updated cluster config ...
	I1221 19:48:31.341477  127170 ssh_runner.go:195] Run: rm -f paused
	I1221 19:48:31.348768  127170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:48:31.352595  127170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-26xrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.359570  127170 pod_ready.go:94] pod "coredns-66bc5c9577-26xrr" is "Ready"
	I1221 19:48:31.359594  127170 pod_ready.go:86] duration metric: took 6.977531ms for pod "coredns-66bc5c9577-26xrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.361757  127170 pod_ready.go:83] waiting for pod "etcd-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.366449  127170 pod_ready.go:94] pod "etcd-addons-659513" is "Ready"
	I1221 19:48:31.366470  127170 pod_ready.go:86] duration metric: took 4.693992ms for pod "etcd-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.368572  127170 pod_ready.go:83] waiting for pod "kube-apiserver-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.376619  127170 pod_ready.go:94] pod "kube-apiserver-addons-659513" is "Ready"
	I1221 19:48:31.376641  127170 pod_ready.go:86] duration metric: took 8.052067ms for pod "kube-apiserver-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.380627  127170 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.753934  127170 pod_ready.go:94] pod "kube-controller-manager-addons-659513" is "Ready"
	I1221 19:48:31.753965  127170 pod_ready.go:86] duration metric: took 373.316303ms for pod "kube-controller-manager-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:31.953852  127170 pod_ready.go:83] waiting for pod "kube-proxy-fbvb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:32.354648  127170 pod_ready.go:94] pod "kube-proxy-fbvb9" is "Ready"
	I1221 19:48:32.354677  127170 pod_ready.go:86] duration metric: took 400.79518ms for pod "kube-proxy-fbvb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:32.553601  127170 pod_ready.go:83] waiting for pod "kube-scheduler-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:32.952951  127170 pod_ready.go:94] pod "kube-scheduler-addons-659513" is "Ready"
	I1221 19:48:32.952984  127170 pod_ready.go:86] duration metric: took 399.351812ms for pod "kube-scheduler-addons-659513" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:32.952997  127170 pod_ready.go:40] duration metric: took 1.604197504s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:48:32.999372  127170 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 19:48:33.001297  127170 out.go:179] * Done! kubectl is now configured to use "addons-659513" cluster and "default" namespace by default
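	[editorial sketch] The gcp-auth output at 19:48:31 above says a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key (the ingress-nginx controller sandbox later in this log carries exactly that label with value "true"). The following is a minimal, hypothetical sketch of such a pod built with the upstream Kubernetes Go types; the pod name, namespace, and image tag are illustrative assumptions, not taken from this test run.

	// Illustrative sketch only: a pod spec carrying the `gcp-auth-skip-secret`
	// label described in the gcp-auth addon output above.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical name
				Namespace: "default",
				// Opts this pod out of the gcp-auth credential mount.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "public.ecr.aws/nginx/nginx:latest", // placeholder image
				}},
			},
		}
		out, err := yaml.Marshal(&pod)
		if err != nil {
			panic(err)
		}
		// Prints a manifest that could be applied with `kubectl apply -f -`.
		fmt.Println(string(out))
	}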
	
	
	==> CRI-O <==
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.855720652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1581b601-ba24-4be7-91ca-0fea0420369c name=/runtime.v1.RuntimeService/Version
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.858523327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cae04c3-5a13-4241-ae10-84cab9d56057 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.861491881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766346691861467321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551108,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cae04c3-5a13-4241-ae10-84cab9d56057 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863414415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863487785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.863753159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container
.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971
bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b83c8337-67f7-4552-bacd-bc6732abb512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.923573046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c88df894-2ecb-4b9b-ab5d-50898cb7c652 name=/runtime.v1.RuntimeService/Version
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.923661187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c88df894-2ecb-4b9b-ab5d-50898cb7c652 name=/runtime.v1.RuntimeService/Version
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.926485682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f34eea82-0e3e-4e7c-a5ee-12384295183e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.927875096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766346691927845160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551108,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f34eea82-0e3e-4e7c-a5ee-12384295183e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930318133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930381178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.930656073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container
.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971
bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b0a6f16-db72-47fa-ae44-d6d0cf8a31bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.932540215Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c42d2696-2092-4a2f-88f9-c7976058c31e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.933517634Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&PodSandboxMetadata{Name:nginx,Uid:33f3ec72-704c-4201-8ff2-47eac4b359fe,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1766346549215672076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:49:08.898465472Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7f347285-8b81-4c24-9b59-da519e7b35b0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346513928437485,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:48:33.605837987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9ecfb70d267b99db7ebc
525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-s7ffl,Uid:da33936d-a439-40f8-8c05-f7eb37c2a965,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346500318989097,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.488185624Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-xlmpc,Uid:c224b53b-30a7-455e-a46f-71e29fefeebd,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1766346436937730430,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: a01aa6f7-e966-4492-ac72-e5e3ceabae8a,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: a01aa6f7-e966-4492-ac72-e5e3ceabae8a,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.571700364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-5skzk,Uid:c9804e45-e681-4a75-95bc-7d01cadcb23a,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1766346436872108949,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 8986cafc-6e33-4101-a110-6119660391f7,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 8986cafc-6e33-4101-a110-6119660391f7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:15.553493955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:97ccdeb0-0aa9-4509-9ca8-0d067721e67a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346435865654862,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-12-21T19:47:13.538961128Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:4c506cde-8495-4847-95bb-99f92a15aeb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346433760756713,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-21T19:47:13.385535624Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-96g9f,Uid:ae1a4e49-3725-4452-ade4-01b3af2dfe3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:176634643116959
3450,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:10.815377584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-26xrr,Uid:dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346427937419586,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[st
ring]string{kubernetes.io/config.seen: 2025-12-21T19:47:07.553463424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&PodSandboxMetadata{Name:kube-proxy-fbvb9,Uid:f81d5845-1ca3-4d59-b971-848c73663c2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346427460627319,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:47:07.107513094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-659513,Uid:3e00703bc6d857e7a94b8aa3578cd0ba,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1766346416088659150,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e00703bc6d857e7a94b8aa3578cd0ba,kubernetes.io/config.seen: 2025-12-21T19:46:55.473412251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-659513,Uid:6a83fa93dc395b9c19eae8f42e5ac0af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416086460714,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42
e5ac0af,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6a83fa93dc395b9c19eae8f42e5ac0af,kubernetes.io/config.seen: 2025-12-21T19:46:55.473413132Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-659513,Uid:f5f17957e871bfb19e971bde6d59acab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416073951811,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971bde6d59acab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.164:8443,kubernetes.io/config.hash: f5f17957e871bfb19e971bde6d59acab,kubernetes.io/config.seen: 2025-12-21T19:46:55.473410502Z,kubernetes.io/config.source: file,},Ru
ntimeHandler:,},&PodSandbox{Id:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&PodSandboxMetadata{Name:etcd-addons-659513,Uid:9f8a4cda82f44637052e031d96df1f39,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346416072379807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.164:2379,kubernetes.io/config.hash: 9f8a4cda82f44637052e031d96df1f39,kubernetes.io/config.seen: 2025-12-21T19:46:55.473406852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c42d2696-2092-4a2f-88f9-c7976058c31e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.934869518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.935187785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.935696570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24cbfb986d5c3ca3914fc9c982bc0f327cab22d1ddd350a5101dea571b531ae4,PodSandboxId:4a41ca16c86e05833fea9885e77582ab9e4210b58533c294dad1e98eb8c23e08,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766346551717301367,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f3ec72-704c-4201-8ff2-47eac4b359fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae51e8a03b5795078d92e305dc7f1e5145cfd39ed842e2f6cd4495696e266f2,PodSandboxId:7888b298e1b5aa605057f03f82f11340b72fe1725c7675291c5bfa317e408079,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1766346516355499449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f347285-8b81-4c24-9b59-da519e7b35b0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ff9b28479efc2d0a0fc471665c02560315c9bd5ab4199b166f0948ed20421,PodSandboxId:d9ecfb70d267b99db7ebc525d5264fa9a7bbcbe02ed1a5b3440a3b1dbc5681cd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1766346507489445293,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-s7ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da33936d-a439-40f8-8c05-f7eb37c2a965,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0d4f54935266cbc9e1f226622aaebfc1148b657bd58b134d2342d3e25a3f81c,PodSandboxId:826c23ce1e03655df44ee44d175bf9e26249c1c8bf7a6a2728bdff10b60fb9d0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346475930855575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xlmpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c224b53b-30a7-455e-a46f-71e29fefeebd,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cb039aacf2b693177d81815eac065ba13bfc5a764199e4c8195fd6b73e4e2e,PodSandboxId:d660c400d7e267153678575b2516776993a2d1b947acd15fe08199d37be2a12a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1766346474567680497,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5skzk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9804e45-e681-4a75-95bc-7d01cadcb23a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2554c908ec62494980fa93068273fa8ea1b82eed4e4bd6c217c4322a493b009,PodSandboxId:b74c477f14c08fd8448306faeee5b5dafbc9d239ed6d56eac75a37a62b92ecf3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1766346459113621590,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c506cde-8495-4847-95bb-99f92a15aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71a3f6b64ed7664ee86b4e70656a368783bc04438140179b672d70912ad173a,PodSandboxId:f10da7fce20989d8abbe13fca81725b8d162ae31e6c482b520a95f2d5a934b20,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1766346444128697039,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-96g9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae1a4e49-3725-4452-ade4-01b3af2dfe3f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54,PodSandboxId:92681fb3c28b7795a449b0f25cdffe478d433a56caeacac88293fbea4b4a9ee1,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346439119695360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ccdeb0-0aa9-4509-9ca8-0d067721e67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f,PodSandboxId:cd23883be15328c292099c0fdf315f96f42d7be535f9563b51624c899365501a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346428931047668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-26xrr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85b7d7-8740-43f9-82ce-8e57e4f1a4d1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f,PodSandboxId:377a6c7a47a554763ff617d6a99ad5dcf3bd6b6c7f46db37c6f1a3d8354c0436,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346428043980106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbvb9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f81d5845-1ca3-4d59-b971-848c73663c2d,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552,PodSandboxId:f3478431553a6f4f25bb09dff168fda89445871726fd8e3c5b24da2d1e74bb58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346416335911432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8a4cda82f44637052e031d96df1f39,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951,PodSandboxId:7a9712131b66b91e098794ea27db2bbd0ec954d8db079739381abe579aee2de2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346416321568471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e00703bc6d857e7a94b8aa3578cd0ba,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container
.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81,PodSandboxId:dd2945bb4b694fb25f86847766b25f3c7a558ea7a9d2f93d575225c673771b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766346416307836253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5f17957e871bfb19e971
bde6d59acab,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6,PodSandboxId:3a3934ff8e8846c95df0f460c16f082fc042910df69600567798ae6faea3e246,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346416296051360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-addons-659513,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a83fa93dc395b9c19eae8f42e5ac0af,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b75e2cd0-c6c7-4b47-9c7e-656ae6168a30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.937457009Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,},},}" file="otel-collector/interceptors.go:62" id=e4addf13-e601-42af-b23c-299b432082b7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.937556367Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e4addf13-e601-42af-b23c-299b432082b7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.938738911Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=56438b81-da3e-48fc-a1db-a43529a4c9d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.939781808Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-qfn7w,Uid:1432962d-567f-41c9-8e1a-86dc0ebcb6c5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1766346691014061737,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-qfn7w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-21T19:51:30.697563944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=56438b81-da3e-48fc-a1db-a43529a4c9d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.940374246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 1432962d-567f-41c9-8e1a-86dc0ebcb6c5,},},}" file="otel-collector/interceptors.go:62" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.940660192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 19:51:31 addons-659513 crio[814]: time="2025-12-21 19:51:31.941885050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6aafdb4c-36c7-4b0a-b2ad-cdcba25b507d name=/runtime.v1.RuntimeService/ListContainers
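The crio debug entries above are the kubelet's CRI RuntimeService round-trips (ListPodSandbox, ListContainers, PodSandboxStatus) while it syncs the freshly scheduled hello-world-app-5d498dc89-qfn7w pod; the last ListContainers call, filtered by the io.kubernetes.pod.uid label, comes back empty, presumably because the pod's container has not been created yet. The same views can be pulled by hand from the node with crictl. A rough sketch, reusing the sandbox ID and pod UID shown in the log; the crictl invocations are generic crictl usage, not commands captured from this run:

    out/minikube-linux-amd64 -p addons-659513 ssh "sudo crictl pods"
    out/minikube-linux-amd64 -p addons-659513 ssh "sudo crictl ps -a"
    # narrow to one pod, mirroring the io.kubernetes.pod.uid label filter in the last request above
    out/minikube-linux-amd64 -p addons-659513 ssh "sudo crictl ps -a --label io.kubernetes.pod.uid=1432962d-567f-41c9-8e1a-86dc0ebcb6c5"
    # rough equivalent of the PodSandboxStatus call
    out/minikube-linux-amd64 -p addons-659513 ssh "sudo crictl inspectp b8ca4d2cd5b7e65811f69a161ce13c537a5e4ca4e7948d0356f617f4146f81d8"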
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	24cbfb986d5c3       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                           2 minutes ago       Running             nginx                     0                   4a41ca16c86e0       nginx                                       default
	8ae51e8a03b57       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   7888b298e1b5a       busybox                                     default
	1d2ff9b28479e       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   d9ecfb70d267b       ingress-nginx-controller-85d4c799dd-s7ffl   ingress-nginx
	b0d4f54935266       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   826c23ce1e036       ingress-nginx-admission-patch-xlmpc         ingress-nginx
	66cb039aacf2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   d660c400d7e26       ingress-nginx-admission-create-5skzk        ingress-nginx
	d2554c908ec62       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   b74c477f14c08       kube-ingress-dns-minikube                   kube-system
	d71a3f6b64ed7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f10da7fce2098       amd-gpu-device-plugin-96g9f                 kube-system
	821adec837734       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   92681fb3c28b7       storage-provisioner                         kube-system
	aaf270f354b50       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   cd23883be1532       coredns-66bc5c9577-26xrr                    kube-system
	944524acd2e98       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             4 minutes ago       Running             kube-proxy                0                   377a6c7a47a55       kube-proxy-fbvb9                            kube-system
	835c8c15bbf37       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   f3478431553a6       etcd-addons-659513                          kube-system
	5546673aec525       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             4 minutes ago       Running             kube-controller-manager   0                   7a9712131b66b       kube-controller-manager-addons-659513       kube-system
	51e3f1b192dcb       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             4 minutes ago       Running             kube-apiserver            0                   dd2945bb4b694       kube-apiserver-addons-659513                kube-system
	70cbc562e70d0       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             4 minutes ago       Running             kube-scheduler            0                   3a3934ff8e884       kube-scheduler-addons-659513                kube-system
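The container status table is the human-readable form of the same ListContainers data: every long-running container is at attempt 0 (no restarts), and the only Exited entries are the two ingress-nginx admission webhook Jobs (create and patch), which run once and exit by design. A rough cross-check from the Kubernetes API side could look like the following; the deployment name ingress-nginx-controller is inferred from the pod and ReplicaSet names and is not shown verbatim in the log:

    kubectl --context addons-659513 -n ingress-nginx get jobs,pods -o wide
    kubectl --context addons-659513 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50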
	
	
	==> coredns [aaf270f354b50d2a160ee904351fae497bcb13ccd6a6225ad9d4d85ddc5a653f] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57736 - 5910 "HINFO IN 8626009017774707841.5089829701906143058. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026395123s
	[INFO] 10.244.0.23:51803 - 45161 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000652122s
	[INFO] 10.244.0.23:50720 - 43576 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000652323s
	[INFO] 10.244.0.23:44305 - 16590 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000188968s
	[INFO] 10.244.0.23:55430 - 11072 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119974s
	[INFO] 10.244.0.23:36131 - 61271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099275s
	[INFO] 10.244.0.23:43585 - 57886 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125965s
	[INFO] 10.244.0.23:39338 - 26550 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000980255s
	[INFO] 10.244.0.23:48366 - 54851 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006955981s
	[INFO] 10.244.0.26:60660 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000342492s
	[INFO] 10.244.0.26:49797 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000301813s
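The coredns log shows a routine startup sequence: the first list calls to the Service VIP (10.96.0.1:443) time out while the cluster network is still settling, the configuration reload succeeds, and the later storage.googleapis.com lookups return NXDOMAIN for each cluster search-domain suffix (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) before the bare name resolves with NOERROR; that is ordinary ndots:5 search-path expansion from the querying pod's resolv.conf, not a DNS failure. A quick way to reproduce the expansion from inside the cluster, sketched with an illustrative pod name (dns-probe) and image tag (busybox:1.36):

    kubectl --context addons-659513 run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- \
      sh -c 'cat /etc/resolv.conf; nslookup storage.googleapis.com'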
	
	
	==> describe nodes <==
	Name:               addons-659513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-659513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=addons-659513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T19_47_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-659513
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 19:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-659513
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 19:51:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 19:49:35 +0000   Sun, 21 Dec 2025 19:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 19:49:35 +0000   Sun, 21 Dec 2025 19:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 19:49:35 +0000   Sun, 21 Dec 2025 19:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 19:49:35 +0000   Sun, 21 Dec 2025 19:47:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-659513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 536fbf6298e14d4fbd81908693d32210
	  System UUID:                536fbf62-98e1-4d4f-bd81-908693d32210
	  Boot ID:                    06755251-81a9-43ca-b220-4e3471a1e4b0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-world-app-5d498dc89-qfn7w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-s7ffl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m17s
	  kube-system                 amd-gpu-device-plugin-96g9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 coredns-66bc5c9577-26xrr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m25s
	  kube-system                 etcd-addons-659513                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m31s
	  kube-system                 kube-apiserver-addons-659513                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-659513        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-fbvb9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-659513                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node addons-659513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node addons-659513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node addons-659513 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node addons-659513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node addons-659513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node addons-659513 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m30s                  kubelet          Node addons-659513 status is now: NodeReady
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-659513 event: Registered Node addons-659513 in Controller
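The Allocated resources block is simply the column sums of the per-pod requests and limits above, divided by the node's allocatable capacity; as a quick sanity check against the pod table:

    cpu requests:    100m + 100m + 100m + 250m + 200m + 100m = 850m   ->  850m / 2000m        ≈ 42%
    memory requests: 90Mi + 70Mi + 100Mi                     = 260Mi  ->  260Mi / 4001788Ki   ≈ 6%
    memory limits:   170Mi (coredns only)                             ->  170Mi / 4001788Ki   ≈ 4%

The same summary can be re-queried at any time with:

    kubectl --context addons-659513 describe node addons-659513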
	
	
	==> dmesg <==
	[  +0.235360] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.000190] kauditd_printk_skb: 318 callbacks suppressed
	[  +0.752884] kauditd_printk_skb: 302 callbacks suppressed
	[  +2.679842] kauditd_printk_skb: 395 callbacks suppressed
	[  +5.111331] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.571017] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.031183] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.026802] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.032100] kauditd_printk_skb: 109 callbacks suppressed
	[Dec21 19:48] kauditd_printk_skb: 82 callbacks suppressed
	[  +4.328318] kauditd_printk_skb: 112 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.989593] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.052907] kauditd_printk_skb: 47 callbacks suppressed
	[  +2.427301] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.606133] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.036443] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.769446] kauditd_printk_skb: 59 callbacks suppressed
	[Dec21 19:49] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.824977] kauditd_printk_skb: 184 callbacks suppressed
	[  +4.079920] kauditd_printk_skb: 153 callbacks suppressed
	[  +8.332087] kauditd_printk_skb: 157 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.164027] kauditd_printk_skb: 61 callbacks suppressed
	[Dec21 19:51] kauditd_printk_skb: 127 callbacks suppressed
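The dmesg excerpt contains only "kauditd_printk_skb: N callbacks suppressed" lines, i.e. the kernel rate-limiting audit messages to the console; there are no oops, OOM-kill, or storage errors in the captured window, so this excerpt does not, by itself, point at a node-level fault. If needed, the full ring buffer can be re-read on the node (a generic command, not part of this run):

    out/minikube-linux-amd64 -p addons-659513 ssh "sudo dmesg | tail -n 100"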
	
	
	==> etcd [835c8c15bbf37d26aca711aed08532cb1b32be70b119565fe2f14cdba5136552] <==
	{"level":"info","ts":"2025-12-21T19:47:50.950187Z","caller":"traceutil/trace.go:172","msg":"trace[1909800173] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1009; }","duration":"186.796996ms","start":"2025-12-21T19:47:50.763381Z","end":"2025-12-21T19:47:50.950178Z","steps":["trace[1909800173] 'agreement among raft nodes before linearized reading'  (duration: 185.309745ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:47:50.948275Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.624662ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:50.950302Z","caller":"traceutil/trace.go:172","msg":"trace[81494307] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1009; }","duration":"123.652116ms","start":"2025-12-21T19:47:50.826641Z","end":"2025-12-21T19:47:50.950293Z","steps":["trace[81494307] 'agreement among raft nodes before linearized reading'  (duration: 121.60336ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:52.995299Z","caller":"traceutil/trace.go:172","msg":"trace[454392188] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1035; }","duration":"229.3465ms","start":"2025-12-21T19:47:52.765935Z","end":"2025-12-21T19:47:52.995282Z","steps":["trace[454392188] 'read index received'  (duration: 229.341257ms)","trace[454392188] 'applied index is now lower than readState.Index'  (duration: 3.998µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:52.995429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.463876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:52.995449Z","caller":"traceutil/trace.go:172","msg":"trace[901148205] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1012; }","duration":"229.513172ms","start":"2025-12-21T19:47:52.765931Z","end":"2025-12-21T19:47:52.995444Z","steps":["trace[901148205] 'agreement among raft nodes before linearized reading'  (duration: 229.439705ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:52.995460Z","caller":"traceutil/trace.go:172","msg":"trace[694933170] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"244.701316ms","start":"2025-12-21T19:47:52.750746Z","end":"2025-12-21T19:47:52.995447Z","steps":["trace[694933170] 'process raft request'  (duration: 244.616547ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:47:52.995636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.212298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:52.995653Z","caller":"traceutil/trace.go:172","msg":"trace[648566919] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"168.232054ms","start":"2025-12-21T19:47:52.827416Z","end":"2025-12-21T19:47:52.995649Z","steps":["trace[648566919] 'agreement among raft nodes before linearized reading'  (duration: 168.193401ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:48:25.660716Z","caller":"traceutil/trace.go:172","msg":"trace[190185239] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"252.757518ms","start":"2025-12-21T19:48:25.407933Z","end":"2025-12-21T19:48:25.660691Z","steps":["trace[190185239] 'process raft request'  (duration: 252.656644ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:48:27.327463Z","caller":"traceutil/trace.go:172","msg":"trace[1509538383] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"198.407293ms","start":"2025-12-21T19:48:27.129044Z","end":"2025-12-21T19:48:27.327451Z","steps":["trace[1509538383] 'process raft request'  (duration: 198.312852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:48:27.328232Z","caller":"traceutil/trace.go:172","msg":"trace[1565997657] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1204; }","duration":"186.262719ms","start":"2025-12-21T19:48:27.141783Z","end":"2025-12-21T19:48:27.328046Z","steps":["trace[1565997657] 'read index received'  (duration: 186.255953ms)","trace[1565997657] 'applied index is now lower than readState.Index'  (duration: 5.648µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:48:27.328546Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.689782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:48:27.328647Z","caller":"traceutil/trace.go:172","msg":"trace[215539254] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1173; }","duration":"186.801442ms","start":"2025-12-21T19:48:27.141779Z","end":"2025-12-21T19:48:27.328581Z","steps":["trace[215539254] 'agreement among raft nodes before linearized reading'  (duration: 186.670911ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:48:27.330027Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.528724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:48:27.330082Z","caller":"traceutil/trace.go:172","msg":"trace[847561110] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"149.628219ms","start":"2025-12-21T19:48:27.180439Z","end":"2025-12-21T19:48:27.330067Z","steps":["trace[847561110] 'agreement among raft nodes before linearized reading'  (duration: 148.461363ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:48:56.468264Z","caller":"traceutil/trace.go:172","msg":"trace[203431017] linearizableReadLoop","detail":"{readStateIndex:1391; appliedIndex:1391; }","duration":"166.219821ms","start":"2025-12-21T19:48:56.301985Z","end":"2025-12-21T19:48:56.468205Z","steps":["trace[203431017] 'read index received'  (duration: 166.207539ms)","trace[203431017] 'applied index is now lower than readState.Index'  (duration: 8.328µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:48:56.470032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.026643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:48:56.470323Z","caller":"traceutil/trace.go:172","msg":"trace[468893441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1354; }","duration":"168.330966ms","start":"2025-12-21T19:48:56.301980Z","end":"2025-12-21T19:48:56.470311Z","steps":["trace[468893441] 'agreement among raft nodes before linearized reading'  (duration: 166.400235ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:49:02.208944Z","caller":"traceutil/trace.go:172","msg":"trace[1291651771] linearizableReadLoop","detail":"{readStateIndex:1438; appliedIndex:1438; }","duration":"305.318422ms","start":"2025-12-21T19:49:01.903609Z","end":"2025-12-21T19:49:02.208927Z","steps":["trace[1291651771] 'read index received'  (duration: 305.313273ms)","trace[1291651771] 'applied index is now lower than readState.Index'  (duration: 4.382µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:49:02.209060Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"305.436304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:49:02.209079Z","caller":"traceutil/trace.go:172","msg":"trace[789901411] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1396; }","duration":"305.466459ms","start":"2025-12-21T19:49:01.903606Z","end":"2025-12-21T19:49:02.209072Z","steps":["trace[789901411] 'agreement among raft nodes before linearized reading'  (duration: 305.407736ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:49:02.209103Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:49:01.903590Z","time spent":"305.50263ms","remote":"127.0.0.1:35478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-21T19:49:02.209931Z","caller":"traceutil/trace.go:172","msg":"trace[1606590807] transaction","detail":"{read_only:false; response_revision:1397; number_of_response:1; }","duration":"344.388785ms","start":"2025-12-21T19:49:01.865533Z","end":"2025-12-21T19:49:02.209922Z","steps":["trace[1606590807] 'process raft request'  (duration: 343.887894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:49:02.210454Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:49:01.865518Z","time spent":"344.862199ms","remote":"127.0.0.1:35830","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1395 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 19:51:32 up 5 min,  0 users,  load average: 0.62, 1.14, 0.59
	Linux addons-659513 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [51e3f1b192dcb7acee686c577bf7a411a3d775b35627c76e70a7d5588ed42e81] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1221 19:47:48.673666       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.125.156:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.125.156:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.125.156:443: connect: connection refused" logger="UnhandledError"
	I1221 19:47:48.710859       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1221 19:47:48.743198       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1221 19:48:42.806457       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:41020: use of closed network connection
	E1221 19:48:43.002569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:41058: use of closed network connection
	I1221 19:48:52.220073       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.196.75"}
	I1221 19:49:08.680928       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1221 19:49:08.936535       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.21.239"}
	I1221 19:49:24.077689       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1221 19:49:30.939397       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1221 19:49:49.688187       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1221 19:49:52.134678       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 19:49:52.134793       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 19:49:52.166833       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 19:49:52.166882       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 19:49:52.196769       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 19:49:52.197108       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 19:49:52.227673       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 19:49:52.227917       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1221 19:49:53.169835       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1221 19:49:53.227991       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1221 19:49:53.244287       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1221 19:51:30.787256       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.25.237"}
	
	
	==> kube-controller-manager [5546673aec525016ac3db18f88a4fc01cedc9678c9eb422c032127aa209ca951] <==
	E1221 19:50:02.010924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:02.716710       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:02.717786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1221 19:50:07.379357       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1221 19:50:07.379403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:50:07.464387       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1221 19:50:07.464440       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1221 19:50:07.498260       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:07.500165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:13.370720       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:13.371779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:14.254564       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:14.255621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:30.507175       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:30.508338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:32.906192       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:32.907233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:50:37.036337       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:50:37.037334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:51:05.868816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:51:05.869967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:51:11.777474       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:51:11.778508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1221 19:51:13.406661       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1221 19:51:13.407739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [944524acd2e98b5a8fbda9f53aa5af06093335f472b9c4739bf44311faf57c5f] <==
	I1221 19:47:08.800911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 19:47:08.902022       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 19:47:08.903239       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1221 19:47:08.907177       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 19:47:09.200019       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 19:47:09.200170       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 19:47:09.200205       1 server_linux.go:132] "Using iptables Proxier"
	I1221 19:47:09.313047       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 19:47:09.313375       1 server.go:527] "Version info" version="v1.34.3"
	I1221 19:47:09.313407       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:47:09.333398       1 config.go:200] "Starting service config controller"
	I1221 19:47:09.333430       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 19:47:09.333455       1 config.go:106] "Starting endpoint slice config controller"
	I1221 19:47:09.333459       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 19:47:09.333467       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 19:47:09.333471       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 19:47:09.335614       1 config.go:309] "Starting node config controller"
	I1221 19:47:09.336110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 19:47:09.434004       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 19:47:09.434091       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 19:47:09.434166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 19:47:09.437060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [70cbc562e70d050f91338c415852cd26b7e7f1fdea65d9883e7b97d79508e7a6] <==
	E1221 19:46:59.321104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:46:59.322575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1221 19:46:59.321387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:46:59.322885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 19:46:59.322949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 19:46:59.323260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 19:46:59.323270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 19:46:59.323372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1221 19:46:59.323414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:46:59.323448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 19:46:59.323489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 19:47:00.149079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 19:47:00.158573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:47:00.187880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1221 19:47:00.259793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:47:00.269103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1221 19:47:00.300173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:47:00.346695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 19:47:00.378969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 19:47:00.478854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 19:47:00.479688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 19:47:00.500232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 19:47:00.544850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 19:47:00.580528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1221 19:47:02.404197       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 19:50:01 addons-659513 kubelet[1504]: E1221 19:50:01.993953    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346601993582438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:02 addons-659513 kubelet[1504]: I1221 19:50:02.936214    1504 scope.go:117] "RemoveContainer" containerID="e2ad0089f2b30d3fc3c0b40b208508e9d62daa0110ac9b3c4d232f45be2a0c23"
	Dec 21 19:50:03 addons-659513 kubelet[1504]: I1221 19:50:03.059744    1504 scope.go:117] "RemoveContainer" containerID="8d77481da0af0050129321a6ed21d1c2cb789c13cd476c83208983d9086e5c0f"
	Dec 21 19:50:03 addons-659513 kubelet[1504]: I1221 19:50:03.180268    1504 scope.go:117] "RemoveContainer" containerID="1f6aaf2c36d5ff744f0e3820d2eedd7f1f39eb88e5e0935dff55980a0b590697"
	Dec 21 19:50:11 addons-659513 kubelet[1504]: E1221 19:50:11.997519    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346611997066221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:11 addons-659513 kubelet[1504]: E1221 19:50:11.997540    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346611997066221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:22 addons-659513 kubelet[1504]: E1221 19:50:22.000481    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346622000107996  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:22 addons-659513 kubelet[1504]: E1221 19:50:22.000505    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346622000107996  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:32 addons-659513 kubelet[1504]: E1221 19:50:32.004057    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346632003675120  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:32 addons-659513 kubelet[1504]: E1221 19:50:32.004101    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346632003675120  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:42 addons-659513 kubelet[1504]: E1221 19:50:42.007092    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346642006597719  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:42 addons-659513 kubelet[1504]: E1221 19:50:42.007183    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346642006597719  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:52 addons-659513 kubelet[1504]: E1221 19:50:52.010524    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346652010049684  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:52 addons-659513 kubelet[1504]: E1221 19:50:52.010566    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346652010049684  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:50:59 addons-659513 kubelet[1504]: I1221 19:50:59.822897    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-96g9f" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:51:02 addons-659513 kubelet[1504]: E1221 19:51:02.014247    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346662013673269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:02 addons-659513 kubelet[1504]: E1221 19:51:02.014274    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346662013673269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:12 addons-659513 kubelet[1504]: E1221 19:51:12.017839    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346672017348490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:12 addons-659513 kubelet[1504]: E1221 19:51:12.017889    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346672017348490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:22 addons-659513 kubelet[1504]: E1221 19:51:22.020899    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346682020545685  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:22 addons-659513 kubelet[1504]: E1221 19:51:22.020940    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346682020545685  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551108}  inodes_used:{value:196}}"
	Dec 21 19:51:24 addons-659513 kubelet[1504]: I1221 19:51:24.823054    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:51:30 addons-659513 kubelet[1504]: I1221 19:51:30.799906    1504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvf6w\" (UniqueName: \"kubernetes.io/projected/1432962d-567f-41c9-8e1a-86dc0ebcb6c5-kube-api-access-zvf6w\") pod \"hello-world-app-5d498dc89-qfn7w\" (UID: \"1432962d-567f-41c9-8e1a-86dc0ebcb6c5\") " pod="default/hello-world-app-5d498dc89-qfn7w"
	Dec 21 19:51:32 addons-659513 kubelet[1504]: E1221 19:51:32.030733    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766346692029005131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:559714}  inodes_used:{value:201}}"
	Dec 21 19:51:32 addons-659513 kubelet[1504]: E1221 19:51:32.030755    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766346692029005131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:559714}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [821adec83773446bd435ef05ab329e5d395b6617013fdb8fb83cfe0e620f4c54] <==
	W1221 19:51:06.914413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:08.917744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:08.922918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:10.926783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:10.932454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:12.936454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:12.941339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:14.945217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:14.953028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:16.955862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:16.961598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:18.966580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:18.974022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:20.977779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:20.983304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:22.988088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:22.993509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:24.997795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:25.003091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:27.006324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:27.014609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:29.018277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:29.023520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:31.040095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:51:31.058424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
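For context, the etcd log above repeatedly warns "apply request took too long" for reads during this window, well past the 100ms budget. A hand-run latency check, shown only as a sketch, could ask etcd for its own endpoint status from inside the cluster; the pod name etcd-addons-659513 and the certificate paths are assumptions based on the node name and the usual /var/lib/minikube/certs/etcd layout:

	kubectl --context addons-659513 -n kube-system exec etcd-addons-659513 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	          --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	          --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	          --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	          endpoint status --write-out=table
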
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-659513 -n addons-659513
helpers_test.go:270: (dbg) Run:  kubectl --context addons-659513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc: exit status 1 (59.133073ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5skzk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xlmpc" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-659513 describe pod ingress-nginx-admission-create-5skzk ingress-nginx-admission-patch-xlmpc: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable ingress --alsologtostderr -v=1: (7.715410145s)
--- FAIL: TestAddons/parallel/Ingress (153.33s)
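When an ingress check fails like this, a manual re-run before the addon cleanup above usually starts with the controller pods and the Ingress object. A sketch only; the deployment name ingress-nginx-controller is the addon's usual default and is assumed here:

	kubectl --context addons-659513 -n ingress-nginx get pods -o wide
	kubectl --context addons-659513 get ingress -A
	kubectl --context addons-659513 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
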

                                                
                                    
TestCertExpiration (1058.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-514100 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-514100 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (37.752010874s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-514100 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-514100 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 80 (13m58.398135755s)

                                                
                                                
-- stdout --
	* [cert-expiration-514100] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-514100" primary control-plane node in "cert-expiration-514100" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.fbf8f07f has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001676898s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000506798s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000409089s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000646669s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.50.159:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.502149564s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.502149564s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
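The kubeadm output above already names the triage path: list the control-plane containers through the CRI-O socket and read the logs of whichever one exited. A minimal sketch of that triage from the host, assuming the profile name used by this test (cert-expiration-514100), the CRI-O socket path printed by kubeadm, and a hypothetical CONTAINERID placeholder:

	out/minikube-linux-amd64 -p cert-expiration-514100 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p cert-expiration-514100 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
	out/minikube-linux-amd64 -p cert-expiration-514100 ssh "sudo journalctl -xeu kubelet --all --full --no-pager"

The [WARNING Service-Kubelet] line in stderr concerns boot-time enablement only; the kubelet itself was reported healthy a few lines earlier, so the refused connections point at the control-plane containers rather than at the kubelet.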
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-514100 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 80
cert_options_test.go:138: *** TestCertExpiration FAILED at 2025-12-21 21:19:48.162188677 +0000 UTC m=+5616.999980541
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestCertExpiration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-514100 -n cert-expiration-514100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-514100 -n cert-expiration-514100: exit status 2 (203.403336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-514100 logs -n 25
helpers_test.go:261: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-340687 sudo iptables -t nat -L -n -v                                 │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl status kubelet --all --full --no-pager         │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl cat kubelet --no-pager                         │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl status docker --all --full --no-pager          │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │                     │
	│ ssh     │ -p bridge-340687 sudo systemctl cat docker --no-pager                          │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /etc/docker/daemon.json                              │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo docker system info                                       │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │                     │
	│ ssh     │ -p bridge-340687 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │                     │
	│ ssh     │ -p bridge-340687 sudo systemctl cat cri-docker --no-pager                      │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │                     │
	│ ssh     │ -p bridge-340687 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cri-dockerd --version                                    │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl status containerd --all --full --no-pager      │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │                     │
	│ ssh     │ -p bridge-340687 sudo systemctl cat containerd --no-pager                      │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /lib/systemd/system/containerd.service               │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo cat /etc/containerd/config.toml                          │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo containerd config dump                                   │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl status crio --all --full --no-pager            │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo systemctl cat crio --no-pager                            │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ ssh     │ -p bridge-340687 sudo crio config                                              │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	│ delete  │ -p bridge-340687                                                               │ bridge-340687 │ jenkins │ v1.37.0 │ 21 Dec 25 21:16 UTC │ 21 Dec 25 21:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 21:15:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 21:15:02.711269  174120 out.go:360] Setting OutFile to fd 1 ...
	I1221 21:15:02.711642  174120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:15:02.711658  174120 out.go:374] Setting ErrFile to fd 2...
	I1221 21:15:02.711665  174120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:15:02.711897  174120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 21:15:02.712437  174120 out.go:368] Setting JSON to false
	I1221 21:15:02.713581  174120 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17853,"bootTime":1766333850,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 21:15:02.713643  174120 start.go:143] virtualization: kvm guest
	I1221 21:15:02.719570  174120 out.go:179] * [test-preload-dl-gcs-cached-451655] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 21:15:02.724100  174120 notify.go:221] Checking for updates...
	I1221 21:15:02.725169  174120 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 21:15:02.796227  174120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 21:15:02.802747  174120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:15:02.816671  174120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 21:15:02.872726  174120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 21:15:02.877618  174120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 21:15:02.890285  174120 config.go:182] Loaded profile config "bridge-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:15:02.890392  174120 config.go:182] Loaded profile config "cert-expiration-514100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:15:02.890510  174120 config.go:182] Loaded profile config "flannel-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:15:02.890584  174120 config.go:182] Loaded profile config "guest-667849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1221 21:15:02.890711  174120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 21:15:02.951023  174120 out.go:179] * Using the kvm2 driver based on user configuration
	I1221 21:15:02.958517  174120 start.go:309] selected driver: kvm2
	I1221 21:15:02.958545  174120 start.go:928] validating driver "kvm2" against <nil>
	I1221 21:15:02.958897  174120 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 21:15:02.959423  174120 start_flags.go:413] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1221 21:15:02.959596  174120 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 21:15:02.959635  174120 cni.go:84] Creating CNI manager for ""
	I1221 21:15:02.959687  174120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 21:15:02.959696  174120 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1221 21:15:02.959746  174120 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-451655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-451655 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:15:02.959868  174120 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 21:15:02.987746  174120 out.go:179] * Starting "test-preload-dl-gcs-cached-451655" primary control-plane node in "test-preload-dl-gcs-cached-451655" cluster
	I1221 21:15:02.989682  174120 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 21:15:02.989725  174120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 21:15:02.989771  174120 cache.go:65] Caching tarball of preloaded images
	I1221 21:15:02.989924  174120 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 21:15:02.989938  174120 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I1221 21:15:02.990167  174120 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/test-preload-dl-gcs-cached-451655/config.json ...
	I1221 21:15:02.990221  174120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/test-preload-dl-gcs-cached-451655/config.json: {Name:mk2a815efa6d0e40b699c332e3012afc74e4dfc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 21:15:02.990467  174120 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 21:15:02.990649  174120 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I1221 21:15:02.999982  174120 out.go:179] * Download complete!
	I1221 21:15:01.761890  172870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1221 21:15:01.779333  172870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1221 21:15:01.806381  172870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 21:15:01.806570  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:01.806654  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-340687 minikube.k8s.io/updated_at=2025_12_21T21_15_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=bridge-340687 minikube.k8s.io/primary=true
	I1221 21:15:01.860189  172870 ops.go:34] apiserver oom_adj: -16
	I1221 21:15:01.988853  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:02.489706  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:02.989630  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:03.489628  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:03.989216  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:04.489744  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:04.989154  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:05.489167  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:05.989817  172870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 21:15:06.087599  172870 kubeadm.go:1114] duration metric: took 4.281103014s to wait for elevateKubeSystemPrivileges
	I1221 21:15:06.087653  172870 kubeadm.go:403] duration metric: took 18.982237568s to StartCluster
	I1221 21:15:06.087680  172870 settings.go:142] acquiring lock: {Name:mk8bc901164ee13eb5278832ae429ca9408ea551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 21:15:06.087782  172870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:15:06.089020  172870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-122429/kubeconfig: {Name:mke0d928f8059efde48d6d18bc9cf0e4672401c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 21:15:06.089247  172870 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.141 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 21:15:06.089264  172870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 21:15:06.089304  172870 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 21:15:06.089405  172870 addons.go:70] Setting storage-provisioner=true in profile "bridge-340687"
	I1221 21:15:06.089428  172870 addons.go:70] Setting default-storageclass=true in profile "bridge-340687"
	I1221 21:15:06.089445  172870 addons.go:239] Setting addon storage-provisioner=true in "bridge-340687"
	I1221 21:15:06.089449  172870 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-340687"
	I1221 21:15:06.089464  172870 config.go:182] Loaded profile config "bridge-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:15:06.089476  172870 host.go:66] Checking if "bridge-340687" exists ...
	I1221 21:15:06.091832  172870 out.go:179] * Verifying Kubernetes components...
	I1221 21:15:06.093028  172870 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 21:15:06.093071  172870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 21:15:06.093430  172870 addons.go:239] Setting addon default-storageclass=true in "bridge-340687"
	I1221 21:15:06.093469  172870 host.go:66] Checking if "bridge-340687" exists ...
	I1221 21:15:06.095206  172870 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 21:15:06.095228  172870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 21:15:06.097951  172870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 21:15:06.097969  172870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 21:15:06.098074  172870 main.go:144] libmachine: domain bridge-340687 has defined MAC address 52:54:00:0e:b2:b5 in network mk-bridge-340687
	I1221 21:15:06.098736  172870 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:b2:b5", ip: ""} in network mk-bridge-340687: {Iface:virbr4 ExpiryTime:2025-12-21 22:14:38 +0000 UTC Type:0 Mac:52:54:00:0e:b2:b5 Iaid: IPaddr:192.168.72.141 Prefix:24 Hostname:bridge-340687 Clientid:01:52:54:00:0e:b2:b5}
	I1221 21:15:06.098775  172870 main.go:144] libmachine: domain bridge-340687 has defined IP address 192.168.72.141 and MAC address 52:54:00:0e:b2:b5 in network mk-bridge-340687
	I1221 21:15:06.099024  172870 sshutil.go:53] new ssh client: &{IP:192.168.72.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/bridge-340687/id_rsa Username:docker}
	I1221 21:15:06.100917  172870 main.go:144] libmachine: domain bridge-340687 has defined MAC address 52:54:00:0e:b2:b5 in network mk-bridge-340687
	I1221 21:15:06.101294  172870 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:b2:b5", ip: ""} in network mk-bridge-340687: {Iface:virbr4 ExpiryTime:2025-12-21 22:14:38 +0000 UTC Type:0 Mac:52:54:00:0e:b2:b5 Iaid: IPaddr:192.168.72.141 Prefix:24 Hostname:bridge-340687 Clientid:01:52:54:00:0e:b2:b5}
	I1221 21:15:06.101317  172870 main.go:144] libmachine: domain bridge-340687 has defined IP address 192.168.72.141 and MAC address 52:54:00:0e:b2:b5 in network mk-bridge-340687
	I1221 21:15:06.101478  172870 sshutil.go:53] new ssh client: &{IP:192.168.72.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/bridge-340687/id_rsa Username:docker}
	I1221 21:15:06.250686  172870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 21:15:06.345081  172870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 21:15:06.477010  172870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 21:15:06.629379  172870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 21:15:06.633375  172870 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1221 21:15:06.634773  172870 node_ready.go:35] waiting up to 15m0s for node "bridge-340687" to be "Ready" ...
	I1221 21:15:06.658749  172870 node_ready.go:49] node "bridge-340687" is "Ready"
	I1221 21:15:06.658783  172870 node_ready.go:38] duration metric: took 23.968767ms for node "bridge-340687" to be "Ready" ...
	I1221 21:15:06.658801  172870 api_server.go:52] waiting for apiserver process to appear ...
	I1221 21:15:06.658863  172870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 21:15:07.094592  172870 api_server.go:72] duration metric: took 1.005303314s to wait for apiserver process to appear ...
	I1221 21:15:07.094623  172870 api_server.go:88] waiting for apiserver healthz status ...
	I1221 21:15:07.094649  172870 api_server.go:253] Checking apiserver healthz at https://192.168.72.141:8443/healthz ...
	I1221 21:15:07.108693  172870 api_server.go:279] https://192.168.72.141:8443/healthz returned 200:
	ok
	I1221 21:15:07.115977  172870 api_server.go:141] control plane version: v1.34.3
	I1221 21:15:07.116011  172870 api_server.go:131] duration metric: took 21.3811ms to wait for apiserver health ...
	I1221 21:15:07.116022  172870 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 21:15:07.123638  172870 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1221 21:15:07.125029  172870 addons.go:530] duration metric: took 1.035718423s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1221 21:15:07.133116  172870 system_pods.go:59] 6 kube-system pods found
	I1221 21:15:07.133171  172870 system_pods.go:61] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:15:07.133183  172870 system_pods.go:61] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:07.133202  172870 system_pods.go:61] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:07.133231  172870 system_pods.go:61] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 21:15:07.133240  172870 system_pods.go:61] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:07.133247  172870 system_pods.go:61] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 21:15:07.133261  172870 system_pods.go:74] duration metric: took 17.230587ms to wait for pod list to return data ...
	I1221 21:15:07.133277  172870 default_sa.go:34] waiting for default service account to be created ...
	I1221 21:15:07.140543  172870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-340687" context rescaled to 1 replicas
	I1221 21:15:07.143660  172870 default_sa.go:45] found service account: "default"
	I1221 21:15:07.143687  172870 default_sa.go:55] duration metric: took 10.397689ms for default service account to be created ...
	I1221 21:15:07.143700  172870 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 21:15:07.161315  172870 system_pods.go:86] 7 kube-system pods found
	I1221 21:15:07.161347  172870 system_pods.go:89] "coredns-66bc5c9577-hxz29" [052d2dba-d34c-41d7-9a60-cba80af3681d] Pending
	I1221 21:15:07.161355  172870 system_pods.go:89] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:15:07.161361  172870 system_pods.go:89] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:07.161369  172870 system_pods.go:89] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:07.161374  172870 system_pods.go:89] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 21:15:07.161378  172870 system_pods.go:89] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:07.161388  172870 system_pods.go:89] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 21:15:07.161430  172870 retry.go:84] will retry after 300ms: missing components: kube-dns, kube-proxy
	I1221 21:15:07.455007  172870 system_pods.go:86] 8 kube-system pods found
	I1221 21:15:07.455058  172870 system_pods.go:89] "coredns-66bc5c9577-hxz29" [052d2dba-d34c-41d7-9a60-cba80af3681d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:07.455069  172870 system_pods.go:89] "coredns-66bc5c9577-kcwvp" [570594e4-9ebd-4338-ae2d-c6b0ce022f75] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:07.455081  172870 system_pods.go:89] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:15:07.455090  172870 system_pods.go:89] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:07.455100  172870 system_pods.go:89] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:07.455113  172870 system_pods.go:89] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 21:15:07.455119  172870 system_pods.go:89] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:07.455133  172870 system_pods.go:89] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 21:15:07.803472  172870 system_pods.go:86] 8 kube-system pods found
	I1221 21:15:07.803565  172870 system_pods.go:89] "coredns-66bc5c9577-hxz29" [052d2dba-d34c-41d7-9a60-cba80af3681d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:07.803592  172870 system_pods.go:89] "coredns-66bc5c9577-kcwvp" [570594e4-9ebd-4338-ae2d-c6b0ce022f75] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:07.803609  172870 system_pods.go:89] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:15:07.803622  172870 system_pods.go:89] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:07.803638  172870 system_pods.go:89] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:07.803650  172870 system_pods.go:89] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 21:15:07.803660  172870 system_pods.go:89] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:07.803671  172870 system_pods.go:89] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 21:15:08.198765  172870 system_pods.go:86] 8 kube-system pods found
	I1221 21:15:08.198806  172870 system_pods.go:89] "coredns-66bc5c9577-hxz29" [052d2dba-d34c-41d7-9a60-cba80af3681d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:08.198819  172870 system_pods.go:89] "coredns-66bc5c9577-kcwvp" [570594e4-9ebd-4338-ae2d-c6b0ce022f75] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:08.198825  172870 system_pods.go:89] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:15:08.198832  172870 system_pods.go:89] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:08.198838  172870 system_pods.go:89] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:08.198843  172870 system_pods.go:89] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 21:15:08.198848  172870 system_pods.go:89] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:08.198853  172870 system_pods.go:89] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 21:15:08.763789  172870 system_pods.go:86] 8 kube-system pods found
	I1221 21:15:08.763825  172870 system_pods.go:89] "coredns-66bc5c9577-hxz29" [052d2dba-d34c-41d7-9a60-cba80af3681d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:08.763834  172870 system_pods.go:89] "coredns-66bc5c9577-kcwvp" [570594e4-9ebd-4338-ae2d-c6b0ce022f75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 21:15:08.763839  172870 system_pods.go:89] "etcd-bridge-340687" [5c8abbb9-ee69-4083-8f6c-0bc24c13fa22] Running
	I1221 21:15:08.763843  172870 system_pods.go:89] "kube-apiserver-bridge-340687" [d63b6124-d7e8-4d48-80d7-ab4ef6e20528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:15:08.763852  172870 system_pods.go:89] "kube-controller-manager-bridge-340687" [545fc508-6b02-4ee6-a659-c2b4d0259b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:15:08.763858  172870 system_pods.go:89] "kube-proxy-pt9xg" [99c47e65-8e45-48de-a632-3c997d8a18aa] Running
	I1221 21:15:08.763862  172870 system_pods.go:89] "kube-scheduler-bridge-340687" [bd7c1323-151f-4a0c-9633-46fab1e6d5aa] Running
	I1221 21:15:08.763864  172870 system_pods.go:89] "storage-provisioner" [23a5d7fd-f11a-4f23-b5d1-137b91bd9ca3] Running
	I1221 21:15:08.763873  172870 system_pods.go:126] duration metric: took 1.620165214s to wait for k8s-apps to be running ...
	I1221 21:15:08.763879  172870 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 21:15:08.763926  172870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 21:15:08.781878  172870 system_svc.go:56] duration metric: took 17.986006ms WaitForService to wait for kubelet
	I1221 21:15:08.781931  172870 kubeadm.go:587] duration metric: took 2.692649595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 21:15:08.781963  172870 node_conditions.go:102] verifying NodePressure condition ...
	I1221 21:15:08.786479  172870 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1221 21:15:08.786528  172870 node_conditions.go:123] node cpu capacity is 2
	I1221 21:15:08.786547  172870 node_conditions.go:105] duration metric: took 4.578364ms to run NodePressure ...
	I1221 21:15:08.786565  172870 start.go:242] waiting for startup goroutines ...
	I1221 21:15:08.786576  172870 start.go:247] waiting for cluster config update ...
	I1221 21:15:08.786589  172870 start.go:256] writing updated cluster config ...
	I1221 21:15:08.786867  172870 ssh_runner.go:195] Run: rm -f paused
	I1221 21:15:08.792273  172870 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:15:08.796923  172870 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxz29" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:09.800517  172870 pod_ready.go:99] pod "coredns-66bc5c9577-hxz29" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-hxz29" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-hxz29" not found
	I1221 21:15:09.800541  172870 pod_ready.go:86] duration metric: took 1.003597226s for pod "coredns-66bc5c9577-hxz29" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:09.800552  172870 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kcwvp" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 21:15:11.807782  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:14.307160  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:16.307868  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:18.806942  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:20.807478  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:23.309587  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:25.807410  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:28.308820  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:30.807460  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:33.306533  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:35.307233  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:37.307413  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:39.807317  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	I1221 21:15:43.513240  164091 kubeadm.go:319] [control-plane-check] kube-scheduler is not healthy after 4m0.000506798s
	I1221 21:15:43.513400  164091 kubeadm.go:319] [control-plane-check] kube-controller-manager is not healthy after 4m0.000409089s
	I1221 21:15:43.513480  164091 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000646669s
	I1221 21:15:43.513541  164091 kubeadm.go:319] 
	I1221 21:15:43.513664  164091 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1221 21:15:43.513781  164091 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1221 21:15:43.513855  164091 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1221 21:15:43.513984  164091 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1221 21:15:43.514083  164091 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1221 21:15:43.514198  164091 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1221 21:15:43.514203  164091 kubeadm.go:319] 
	I1221 21:15:43.517018  164091 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 21:15:43.517619  164091 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.50.159:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1221 21:15:43.517673  164091 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1221 21:15:43.517804  164091 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001676898s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000506798s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000409089s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000646669s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.50.159:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
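	# Editor's sketch (illustrative, not part of the captured log): after this failed init,
	# minikube resets the control plane and re-runs kubeadm, as the reset/Start lines below show.
	# The same cycle can be reproduced by hand inside the VM; binary and config paths are taken
	# from the log, and the flags mirror the command minikube issues.
	sudo env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem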
	
	I1221 21:15:43.517904  164091 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1221 21:15:44.685836  164091 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.167908133s)
	I1221 21:15:44.685904  164091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 21:15:44.704383  164091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 21:15:44.716338  164091 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 21:15:44.716346  164091 kubeadm.go:158] found existing configuration files:
	
	I1221 21:15:44.716390  164091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 21:15:44.726914  164091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 21:15:44.726962  164091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 21:15:44.738224  164091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 21:15:44.748752  164091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 21:15:44.748802  164091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 21:15:44.760395  164091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 21:15:44.771204  164091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 21:15:44.771262  164091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 21:15:44.782934  164091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 21:15:44.793750  164091 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 21:15:44.793794  164091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 21:15:44.805891  164091 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	W1221 21:15:41.807396  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	W1221 21:15:44.307532  172870 pod_ready.go:104] pod "coredns-66bc5c9577-kcwvp" is not "Ready", error: <nil>
	I1221 21:15:45.307801  172870 pod_ready.go:94] pod "coredns-66bc5c9577-kcwvp" is "Ready"
	I1221 21:15:45.307831  172870 pod_ready.go:86] duration metric: took 35.507272259s for pod "coredns-66bc5c9577-kcwvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.310412  172870 pod_ready.go:83] waiting for pod "etcd-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.316942  172870 pod_ready.go:94] pod "etcd-bridge-340687" is "Ready"
	I1221 21:15:45.316965  172870 pod_ready.go:86] duration metric: took 6.530107ms for pod "etcd-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.319318  172870 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.324965  172870 pod_ready.go:94] pod "kube-apiserver-bridge-340687" is "Ready"
	I1221 21:15:45.324987  172870 pod_ready.go:86] duration metric: took 5.647109ms for pod "kube-apiserver-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.328386  172870 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.504661  172870 pod_ready.go:94] pod "kube-controller-manager-bridge-340687" is "Ready"
	I1221 21:15:45.504696  172870 pod_ready.go:86] duration metric: took 176.289139ms for pod "kube-controller-manager-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:45.704995  172870 pod_ready.go:83] waiting for pod "kube-proxy-pt9xg" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:46.104068  172870 pod_ready.go:94] pod "kube-proxy-pt9xg" is "Ready"
	I1221 21:15:46.104096  172870 pod_ready.go:86] duration metric: took 399.074193ms for pod "kube-proxy-pt9xg" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:46.305565  172870 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:46.704982  172870 pod_ready.go:94] pod "kube-scheduler-bridge-340687" is "Ready"
	I1221 21:15:46.705010  172870 pod_ready.go:86] duration metric: took 399.41984ms for pod "kube-scheduler-bridge-340687" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:15:46.705021  172870 pod_ready.go:40] duration metric: took 37.912715377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:15:46.751373  172870 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 21:15:46.753001  172870 out.go:179] * Done! kubectl is now configured to use "bridge-340687" cluster and "default" namespace by default
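	# Editor's sketch (illustrative, not part of the captured log): the "bridge-340687" start
	# above completed, so its health can be cross-checked from the host through the kubectl
	# context minikube just configured; the context name is assumed to match the profile name
	# reported in the "Done!" line.
	kubectl --context bridge-340687 get nodes
	kubectl --context bridge-340687 get pods -n kube-system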
	I1221 21:15:44.957873  164091 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 21:19:47.398639  164091 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1221 21:19:47.398736  164091 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1221 21:19:47.401025  164091 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 21:19:47.401087  164091 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 21:19:47.401163  164091 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 21:19:47.401274  164091 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 21:19:47.401350  164091 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 21:19:47.401427  164091 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 21:19:47.406618  164091 out.go:252]   - Generating certificates and keys ...
	I1221 21:19:47.406687  164091 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 21:19:47.406734  164091 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 21:19:47.406800  164091 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1221 21:19:47.406845  164091 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1221 21:19:47.406912  164091 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1221 21:19:47.406961  164091 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1221 21:19:47.407009  164091 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1221 21:19:47.407063  164091 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1221 21:19:47.407143  164091 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1221 21:19:47.407249  164091 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1221 21:19:47.407278  164091 kubeadm.go:319] [certs] Using the existing "sa" key
	I1221 21:19:47.407322  164091 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 21:19:47.407360  164091 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 21:19:47.407401  164091 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 21:19:47.407445  164091 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 21:19:47.407516  164091 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 21:19:47.407562  164091 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 21:19:47.407642  164091 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 21:19:47.407714  164091 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 21:19:47.409076  164091 out.go:252]   - Booting up control plane ...
	I1221 21:19:47.409187  164091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 21:19:47.409311  164091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 21:19:47.409374  164091 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 21:19:47.409466  164091 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 21:19:47.409580  164091 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 21:19:47.409670  164091 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 21:19:47.409746  164091 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 21:19:47.409794  164091 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 21:19:47.409924  164091 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 21:19:47.410053  164091 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 21:19:47.410102  164091 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502149564s
	I1221 21:19:47.410177  164091 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 21:19:47.410263  164091 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	I1221 21:19:47.410338  164091 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 21:19:47.410404  164091 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 21:19:47.410468  164091 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	I1221 21:19:47.410553  164091 kubeadm.go:319] [control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	I1221 21:19:47.410616  164091 kubeadm.go:319] [control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	I1221 21:19:47.410620  164091 kubeadm.go:319] 
	I1221 21:19:47.410689  164091 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1221 21:19:47.410753  164091 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1221 21:19:47.410838  164091 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1221 21:19:47.410973  164091 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1221 21:19:47.411039  164091 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1221 21:19:47.411125  164091 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1221 21:19:47.411148  164091 kubeadm.go:319] 
	I1221 21:19:47.411210  164091 kubeadm.go:403] duration metric: took 12m16.012772483s to StartCluster
	I1221 21:19:47.411269  164091 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 21:19:47.411322  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 21:19:47.450265  164091 cri.go:96] found id: "564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1"
	I1221 21:19:47.450281  164091 cri.go:96] found id: ""
	I1221 21:19:47.450289  164091 logs.go:282] 1 containers: [564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1]
	I1221 21:19:47.450347  164091 ssh_runner.go:195] Run: which crictl
	I1221 21:19:47.455298  164091 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 21:19:47.455368  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 21:19:47.487519  164091 cri.go:96] found id: ""
	I1221 21:19:47.487537  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.487546  164091 logs.go:284] No container was found matching "etcd"
	I1221 21:19:47.487552  164091 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 21:19:47.487608  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 21:19:47.518425  164091 cri.go:96] found id: ""
	I1221 21:19:47.518444  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.518451  164091 logs.go:284] No container was found matching "coredns"
	I1221 21:19:47.518456  164091 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 21:19:47.518531  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 21:19:47.552722  164091 cri.go:96] found id: ""
	I1221 21:19:47.552742  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.552749  164091 logs.go:284] No container was found matching "kube-scheduler"
	I1221 21:19:47.552754  164091 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 21:19:47.552806  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 21:19:47.585057  164091 cri.go:96] found id: ""
	I1221 21:19:47.585078  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.585086  164091 logs.go:284] No container was found matching "kube-proxy"
	I1221 21:19:47.585091  164091 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 21:19:47.585145  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 21:19:47.621963  164091 cri.go:96] found id: ""
	I1221 21:19:47.621980  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.621987  164091 logs.go:284] No container was found matching "kube-controller-manager"
	I1221 21:19:47.621992  164091 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 21:19:47.622045  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 21:19:47.654839  164091 cri.go:96] found id: ""
	I1221 21:19:47.654857  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.654864  164091 logs.go:284] No container was found matching "kindnet"
	I1221 21:19:47.654869  164091 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 21:19:47.654923  164091 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 21:19:47.687368  164091 cri.go:96] found id: ""
	I1221 21:19:47.687385  164091 logs.go:282] 0 containers: []
	W1221 21:19:47.687392  164091 logs.go:284] No container was found matching "storage-provisioner"
	I1221 21:19:47.687402  164091 logs.go:123] Gathering logs for describe nodes ...
	I1221 21:19:47.687414  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 21:19:47.760034  164091 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 21:19:47.760044  164091 logs.go:123] Gathering logs for kube-apiserver [564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1] ...
	I1221 21:19:47.760057  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1"
	I1221 21:19:47.794739  164091 logs.go:123] Gathering logs for CRI-O ...
	I1221 21:19:47.794760  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 21:19:47.973974  164091 logs.go:123] Gathering logs for container status ...
	I1221 21:19:47.974000  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 21:19:48.011573  164091 logs.go:123] Gathering logs for kubelet ...
	I1221 21:19:48.011593  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 21:19:48.127038  164091 logs.go:123] Gathering logs for dmesg ...
	I1221 21:19:48.127062  164091 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1221 21:19:48.143698  164091 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.502149564s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1221 21:19:48.143753  164091 out.go:285] * 
	W1221 21:19:48.143840  164091 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.502149564s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1221 21:19:48.143866  164091 out.go:285] * 
	W1221 21:19:48.145644  164091 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 21:19:48.149021  164091 out.go:203] 
	W1221 21:19:48.150472  164091 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.502149564s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.50.159:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000512994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000573172s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000867716s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.50.159:8443/livez: Get "https://192.168.50.159:8443/livez?timeout=10s": dial tcp 192.168.50.159:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1221 21:19:48.150510  164091 out.go:285] * 
	I1221 21:19:48.152050  164091 out.go:203] 
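	
	The kubeadm output above ends with the same troubleshooting hint each time it is echoed: list the Kubernetes containers through the CRI-O socket and read the logs of whichever control-plane container exited. A minimal shell sketch of that loop, assuming it is run inside the guest (for example via 'minikube ssh') with crictl and the default CRI-O socket as used elsewhere in this log; the kube-apiserver loop is illustrative and can be repeated for kube-controller-manager and kube-scheduler:
	
		# list every kube-* container CRI-O knows about, including exited ones
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# dump the last log lines of each exited kube-apiserver container
		for id in $(sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --quiet --state exited --name kube-apiserver); do
		    echo "=== kube-apiserver container $id ==="
		    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs --tail 100 "$id"
		done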
	
	
	==> CRI-O <==
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.690768166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351988690747442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ef38aa4-6bac-4010-9763-4c60fc068485 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.691945689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e33956a1-c6c5-4f43-a6b8-4241fe8f6fe3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.692195210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e33956a1-c6c5-4f43-a6b8-4241fe8f6fe3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.692544448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1,PodSandboxId:8a37d1e8a280957bbffde51b9672ebce53a6f608d739e7ff0110424f4584776b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351925147569117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-514100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0d7b6604933f25e744a4a744d672ea,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e33956a1-c6c5-4f43-a6b8-4241fe8f6fe3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.722669753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba52df25-d405-4c58-9247-859c47a5b771 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.722762344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba52df25-d405-4c58-9247-859c47a5b771 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.724748672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cdbcf55-7055-41b2-abaf-7d9ef2488dc0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.725142266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351988725120439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cdbcf55-7055-41b2-abaf-7d9ef2488dc0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.726159851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6b6b05a-b0ba-4496-9f80-b0d66d723002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.726212736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6b6b05a-b0ba-4496-9f80-b0d66d723002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.726276347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1,PodSandboxId:8a37d1e8a280957bbffde51b9672ebce53a6f608d739e7ff0110424f4584776b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351925147569117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-514100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0d7b6604933f25e744a4a744d672ea,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6b6b05a-b0ba-4496-9f80-b0d66d723002 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.756036549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=920f9122-4133-44dc-9b43-9956ec804732 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.756468544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=920f9122-4133-44dc-9b43-9956ec804732 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.758157449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5377b762-82d4-4678-880b-7ebc39a4d864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.758619213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351988758576613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5377b762-82d4-4678-880b-7ebc39a4d864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.759599193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=470b9476-5f81-4bdc-8415-804861a7eaaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.759763459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=470b9476-5f81-4bdc-8415-804861a7eaaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.759831607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1,PodSandboxId:8a37d1e8a280957bbffde51b9672ebce53a6f608d739e7ff0110424f4584776b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351925147569117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-514100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0d7b6604933f25e744a4a744d672ea,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=470b9476-5f81-4bdc-8415-804861a7eaaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.789012444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d43d1a7-122d-4a90-b7d9-33bd03ac20b9 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.789107766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d43d1a7-122d-4a90-b7d9-33bd03ac20b9 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.790558691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffe2cbbc-7b19-4ef2-bfea-8affab8be233 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.790976559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351988790951907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffe2cbbc-7b19-4ef2-bfea-8affab8be233 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.791987122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d65bf1f-2435-4c05-a8a2-6b67c7b3f8f8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.792198738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d65bf1f-2435-4c05-a8a2-6b67c7b3f8f8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:19:48 cert-expiration-514100 crio[3330]: time="2025-12-21 21:19:48.792468003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1,PodSandboxId:8a37d1e8a280957bbffde51b9672ebce53a6f608d739e7ff0110424f4584776b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351925147569117,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-514100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0d7b6604933f25e744a4a744d672ea,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d65bf1f-2435-4c05-a8a2-6b67c7b3f8f8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                ATTEMPT             POD ID              POD                                     NAMESPACE
	564b74efa6ed6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   About a minute ago   Exited              kube-apiserver      15                  8a37d1e8a2809       kube-apiserver-cert-expiration-514100   kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +1.172293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089982] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110966] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.147049] kauditd_printk_skb: 171 callbacks suppressed
	[  +4.352717] kauditd_printk_skb: 63 callbacks suppressed
	[Dec21 21:03] kauditd_printk_skb: 147 callbacks suppressed
	[Dec21 21:05] kauditd_printk_skb: 11 callbacks suppressed
	[Dec21 21:07] kauditd_printk_skb: 316 callbacks suppressed
	[ +22.691519] kauditd_printk_skb: 137 callbacks suppressed
	[Dec21 21:08] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:09] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:10] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:11] kauditd_printk_skb: 48 callbacks suppressed
	[Dec21 21:12] kauditd_printk_skb: 65 callbacks suppressed
	[ +21.162505] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:13] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.560303] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:14] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:16] kauditd_printk_skb: 92 callbacks suppressed
	[ +21.145580] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:17] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:18] kauditd_printk_skb: 5 callbacks suppressed
	[Dec21 21:19] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> kernel <==
	 21:19:48 up 17 min,  0 users,  load average: 0.01, 0.09, 0.09
	Linux cert-expiration-514100 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1] <==
	W1221 21:18:45.971124       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:45.971183       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1221 21:18:45.971747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1221 21:18:45.988437       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1221 21:18:45.993384       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1221 21:18:45.993418       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1221 21:18:45.993626       1 instance.go:239] Using reconciler: lease
	W1221 21:18:45.994623       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 21:18:45.994684       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:46.971633       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:46.971662       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:46.995989       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:48.281201       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:48.623134       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:48.839805       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:50.757243       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:50.788222       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:51.292764       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:54.517760       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:55.064858       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:18:55.189415       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:19:01.487201       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:19:01.812950       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:19:02.145791       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1221 21:19:05.994852       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
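	
	The apiserver log above shows every attempt to reach etcd on 127.0.0.1:2379 being refused until the server gives up with "Error creating leases: error creating storage factory: context deadline exceeded", and the kubelet log that follows shows why: the static etcd container could not be recreated because its name was still in use. A short diagnostic sketch for confirming that state from inside the guest (run via 'minikube ssh'; assumes the ss utility from iproute2 is present in the guest image, alongside the crictl and journalctl commands already used in this log):
	
		# is any etcd container present at all (running or exited)?
		sudo crictl --timeout=10s ps -a --name etcd
	
		# is anything listening on the etcd client port the apiserver dials?
		sudo ss -tlnp | grep 2379 || echo "nothing listening on 2379"
	
		# the kubelet starts the static etcd pod; check its recent complaints about etcd
		sudo journalctl -u kubelet -n 200 --no-pager | grep -i etcd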
	
	
	==> kubelet <==
	Dec 21 21:19:36 cert-expiration-514100 kubelet[11099]: E1221 21:19:36.142237   11099 kuberuntime_manager.go:1449] "Unhandled Error" err="container etcd start failed in pod etcd-cert-expiration-514100_kube-system(d0e651862cd101b620bc4593b962b61d): CreateContainerError: the container name \"k8s_etcd_etcd-cert-expiration-514100_kube-system_d0e651862cd101b620bc4593b962b61d_1\" is already in use by cfde7947b3f03163c7421f04f31d975e3a2738bf6d77ac89965299cc4b35860b. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 21 21:19:36 cert-expiration-514100 kubelet[11099]: E1221 21:19:36.142268   11099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-cert-expiration-514100_kube-system_d0e651862cd101b620bc4593b962b61d_1\\\" is already in use by cfde7947b3f03163c7421f04f31d975e3a2738bf6d77ac89965299cc4b35860b. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-cert-expiration-514100" podUID="d0e651862cd101b620bc4593b962b61d"
	Dec 21 21:19:37 cert-expiration-514100 kubelet[11099]: E1221 21:19:37.237966   11099 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766351977237683732  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:19:37 cert-expiration-514100 kubelet[11099]: E1221 21:19:37.238005   11099 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766351977237683732  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:19:41 cert-expiration-514100 kubelet[11099]: E1221 21:19:41.143847   11099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.50.159:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.159:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-514100.1883578e3690a62f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-514100,UID:cert-expiration-514100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node cert-expiration-514100 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:cert-expiration-514100,},FirstTimestamp:2025-12-21 21:15:47.171788335 +0000 UTC m=+1.296694244,LastTimestamp:2025-12-21 21:15:47.171788335 +0000 UTC m=+1.296694244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-514100,}"
	Dec 21 21:19:41 cert-expiration-514100 kubelet[11099]: I1221 21:19:41.259116   11099 kubelet_node_status.go:75] "Attempting to register node" node="cert-expiration-514100"
	Dec 21 21:19:41 cert-expiration-514100 kubelet[11099]: E1221 21:19:41.259586   11099 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.50.159:8443/api/v1/nodes\": dial tcp 192.168.50.159:8443: connect: connection refused" node="cert-expiration-514100"
	Dec 21 21:19:42 cert-expiration-514100 kubelet[11099]: E1221 21:19:42.004073   11099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.159:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-514100?timeout=10s\": dial tcp 192.168.50.159:8443: connect: connection refused" interval="7s"
	Dec 21 21:19:46 cert-expiration-514100 kubelet[11099]: E1221 21:19:46.108260   11099 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.50.159:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.159:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Dec 21 21:19:46 cert-expiration-514100 kubelet[11099]: E1221 21:19:46.134963   11099 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-514100\" not found" node="cert-expiration-514100"
	Dec 21 21:19:46 cert-expiration-514100 kubelet[11099]: E1221 21:19:46.141295   11099 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-cert-expiration-514100_kube-system_d58132dceef04c6e421fdc7e14d5a922_1\" is already in use by afd84a5ed51296b50c193f9978d5d6033c37813b8fd9320a1e83dc09ce99d7e1. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="764a92d5194a04382ed3d313a4aad6f69007b57746f4784cc969b5b26f525871"
	Dec 21 21:19:46 cert-expiration-514100 kubelet[11099]: E1221 21:19:46.141418   11099 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-scheduler start failed in pod kube-scheduler-cert-expiration-514100_kube-system(d58132dceef04c6e421fdc7e14d5a922): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-cert-expiration-514100_kube-system_d58132dceef04c6e421fdc7e14d5a922_1\" is already in use by afd84a5ed51296b50c193f9978d5d6033c37813b8fd9320a1e83dc09ce99d7e1. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 21 21:19:46 cert-expiration-514100 kubelet[11099]: E1221 21:19:46.141454   11099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-cert-expiration-514100_kube-system_d58132dceef04c6e421fdc7e14d5a922_1\\\" is already in use by afd84a5ed51296b50c193f9978d5d6033c37813b8fd9320a1e83dc09ce99d7e1. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-cert-expiration-514100" podUID="d58132dceef04c6e421fdc7e14d5a922"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.134945   11099 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-514100\" not found" node="cert-expiration-514100"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.135293   11099 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-514100\" not found" node="cert-expiration-514100"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: I1221 21:19:47.135460   11099 scope.go:117] "RemoveContainer" containerID="564b74efa6ed642b973781255c5d398e6666e8b401f9e6781bca2d30d7bed1b1"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.135564   11099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-cert-expiration-514100_kube-system(8d0d7b6604933f25e744a4a744d672ea)\"" pod="kube-system/kube-apiserver-cert-expiration-514100" podUID="8d0d7b6604933f25e744a4a744d672ea"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.144862   11099 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-514100_kube-system_674af66d900ad04d23a0ba179ca31920_1\" is already in use by 161c50288e80caab2e8621a35c5737464efd636119f5dfe91c1aa6f42b8f86ec. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="f46749f272343896bb118a401c8bcfbed1939b4cf3c0be252e2d5429e21f620f"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.144963   11099 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-controller-manager start failed in pod kube-controller-manager-cert-expiration-514100_kube-system(674af66d900ad04d23a0ba179ca31920): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-514100_kube-system_674af66d900ad04d23a0ba179ca31920_1\" is already in use by 161c50288e80caab2e8621a35c5737464efd636119f5dfe91c1aa6f42b8f86ec. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.145059   11099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-514100_kube-system_674af66d900ad04d23a0ba179ca31920_1\\\" is already in use by 161c50288e80caab2e8621a35c5737464efd636119f5dfe91c1aa6f42b8f86ec. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-cert-expiration-514100" podUID="674af66d900ad04d23a0ba179ca31920"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.239501   11099 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766351987239142790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:19:47 cert-expiration-514100 kubelet[11099]: E1221 21:19:47.239524   11099 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766351987239142790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:19:48 cert-expiration-514100 kubelet[11099]: I1221 21:19:48.261758   11099 kubelet_node_status.go:75] "Attempting to register node" node="cert-expiration-514100"
	Dec 21 21:19:48 cert-expiration-514100 kubelet[11099]: E1221 21:19:48.262283   11099 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.50.159:8443/api/v1/nodes\": dial tcp 192.168.50.159:8443: connect: connection refused" node="cert-expiration-514100"
	Dec 21 21:19:49 cert-expiration-514100 kubelet[11099]: E1221 21:19:49.004981   11099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.50.159:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-514100?timeout=10s\": dial tcp 192.168.50.159:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-514100 -n cert-expiration-514100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-514100 -n cert-expiration-514100: exit status 2 (197.900856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "cert-expiration-514100" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "cert-expiration-514100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-514100
--- FAIL: TestCertExpiration (1058.09s)
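
Every kubelet failure in the log above is the same CreateContainerError: the runtime refuses to create a container because an earlier instance still owns the requested name. As an illustration only (the helper name and regular expression below are assumptions, not minikube or kubelet code), a small Go sketch can pull the contested name and the conflicting container ID out of such a message for further inspection:

package main

import (
	"fmt"
	"regexp"
)

// conflictRE matches the CRI error text seen in the kubelet log above,
// capturing the contested container name and the 64-hex-character ID of the
// container that currently holds it. The pattern is an assumption inferred
// from the log lines, not an API of kubelet or CRI-O.
var conflictRE = regexp.MustCompile(`container name "([^"]+)" is already in use by ([0-9a-f]{64})`)

// parseNameConflict returns the contested name and the conflicting container
// ID, or ok=false if the error text does not match the expected pattern.
func parseNameConflict(errText string) (name, containerID string, ok bool) {
	m := conflictRE.FindStringSubmatch(errText)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	msg := `the container name "k8s_etcd_etcd-cert-expiration-514100_kube-system_d0e651862cd101b620bc4593b962b61d_1" is already in use by cfde7947b3f03163c7421f04f31d975e3a2738bf6d77ac89965299cc4b35860b. You have to remove that container to be able to reuse that name`
	if name, id, ok := parseNameConflict(msg); ok {
		fmt.Printf("name %q is held by container %s\n", name, id)
	}
}

In practice the recovered ID could then be inspected or removed with the container runtime's own tooling before the pod is retried.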

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-555265 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-555265 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-555265 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-555265 --alsologtostderr -v=1] stderr:
I1221 19:57:10.246860  132202 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:10.246980  132202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:10.246989  132202 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:10.246993  132202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:10.247563  132202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:10.248066  132202 mustload.go:66] Loading cluster: functional-555265
I1221 19:57:10.248974  132202 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:10.250967  132202 host.go:66] Checking if "functional-555265" exists ...
I1221 19:57:10.251157  132202 api_server.go:166] Checking apiserver status ...
I1221 19:57:10.251205  132202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1221 19:57:10.253409  132202 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:10.253798  132202 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:10.253827  132202 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:10.253963  132202 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:10.350735  132202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6341/cgroup
W1221 19:57:10.364803  132202 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6341/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1221 19:57:10.364871  132202 ssh_runner.go:195] Run: ls
I1221 19:57:10.369918  132202 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8441/healthz ...
I1221 19:57:10.376228  132202 api_server.go:279] https://192.168.39.15:8441/healthz returned 200:
ok
W1221 19:57:10.376279  132202 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1221 19:57:10.376434  132202 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:10.376453  132202 addons.go:70] Setting dashboard=true in profile "functional-555265"
I1221 19:57:10.376463  132202 addons.go:239] Setting addon dashboard=true in "functional-555265"
I1221 19:57:10.376500  132202 host.go:66] Checking if "functional-555265" exists ...
I1221 19:57:10.379952  132202 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1221 19:57:10.381506  132202 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1221 19:57:10.382819  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1221 19:57:10.382834  132202 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1221 19:57:10.385124  132202 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:10.385524  132202 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:10.385549  132202 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:10.385667  132202 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:10.483925  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1221 19:57:10.483956  132202 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1221 19:57:10.508433  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1221 19:57:10.508518  132202 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1221 19:57:10.529761  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1221 19:57:10.529815  132202 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1221 19:57:10.551679  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1221 19:57:10.551704  132202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1221 19:57:10.572814  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1221 19:57:10.572846  132202 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1221 19:57:10.593769  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1221 19:57:10.593802  132202 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1221 19:57:10.614668  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1221 19:57:10.614697  132202 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1221 19:57:10.634835  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1221 19:57:10.634861  132202 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1221 19:57:10.656577  132202 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1221 19:57:10.656611  132202 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1221 19:57:10.677668  132202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1221 19:57:11.368734  132202 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-555265 addons enable metrics-server

                                                
                                                
I1221 19:57:11.369954  132202 addons.go:202] Writing out "functional-555265" config to set dashboard=true...
W1221 19:57:11.370282  132202 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1221 19:57:11.371001  132202 kapi.go:59] client config for functional-555265: &rest.Config{Host:"https://192.168.39.15:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.key", CAFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1221 19:57:11.371460  132202 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1221 19:57:11.371477  132202 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1221 19:57:11.371502  132202 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1221 19:57:11.371507  132202 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1221 19:57:11.371512  132202 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1221 19:57:11.379946  132202 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  e3493d5a-a7c9-4e04-862e-4f5cbb955ab9 857 0 2025-12-21 19:57:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-21 19:57:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.17.227,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.17.227],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1221 19:57:11.380113  132202 out.go:285] * Launching proxy ...
* Launching proxy ...
I1221 19:57:11.380183  132202 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-555265 proxy --port 36195]
I1221 19:57:11.380656  132202 dashboard.go:159] Waiting for kubectl to output host:port ...
I1221 19:57:11.422732  132202 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1221 19:57:11.422772  132202 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1221 19:57:11.436938  132202 retry.go:84] will retry after 0s: Temporary Error: unexpected response code: 503
I1221 19:57:23.536381  132202 retry.go:84] will retry after 5.1s: Temporary Error: unexpected response code: 503
I1221 19:57:28.673904  132202 retry.go:84] will retry after 6.5s: Temporary Error: unexpected response code: 503
I1221 19:57:35.218418  132202 retry.go:84] will retry after 18.4s: Temporary Error: unexpected response code: 503
I1221 19:57:53.666543  132202 retry.go:84] will retry after 11.4s: Temporary Error: unexpected response code: 503 (duplicate log for 42.2s)
I1221 19:58:05.110651  132202 retry.go:74] will retry after 33.6s: stuck on same error as above for 53.7s...
I1221 19:58:38.722311  132202 retry.go:74] will retry after 46.2s: stuck on same error as above for 1m27.3s...
I1221 19:59:24.993135  132202 retry.go:74] will retry after 1m19s: stuck on same error as above for 2m13.6s...
I1221 20:00:43.955234  132202 retry.go:74] will retry after 30.8s: stuck on same error as above for 3m32.5s...
I1221 20:01:14.769872  132202 retry.go:74] will retry after 1m19.6s: stuck on same error as above for 4m3.3s...
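
The dashboard command above never prints a URL because its health check against the kubectl proxy keeps receiving HTTP 503 and retries with growing delays until the test budget runs out. The sketch below is not minikube's retry.go; it is a minimal, self-contained Go illustration (the function name, port, and doubling backoff are assumptions) of polling a proxy endpoint until it stops returning 503 or a deadline expires:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForProxy polls url until it returns something other than 503, giving up
// after maxWait. The doubling backoff is an illustrative stand-in for the
// retry behaviour visible in the log above.
func waitForProxy(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode != http.StatusServiceUnavailable {
				return nil // proxy target is answering
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("proxy at %s still unhealthy after %s", url, maxWait)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// 36195 is the proxy port used by the dashboard test above.
	if err := waitForProxy("http://127.0.0.1:36195/", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
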
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-555265 -n functional-555265
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 logs -n 25: (1.387446334s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-555265 image ls                                                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image save kicbase/echo-server:functional-555265 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image rm kicbase/echo-server:functional-555265 --alsologtostderr                                                                           │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls                                                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls                                                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image save --daemon kicbase/echo-server:functional-555265 --alsologtostderr                                                                │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ license        │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /etc/ssl/certs/126345.pem                                                                                                     │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /usr/share/ca-certificates/126345.pem                                                                                         │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /etc/ssl/certs/1263452.pem                                                                                                    │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /usr/share/ca-certificates/1263452.pem                                                                                        │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh sudo cat /etc/test/nested/copy/126345/hosts                                                                                            │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls --format short --alsologtostderr                                                                                                  │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls --format yaml --alsologtostderr                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ ssh            │ functional-555265 ssh pgrep buildkitd                                                                                                                        │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │                     │
	│ image          │ functional-555265 image build -t localhost/my-image:functional-555265 testdata/build --alsologtostderr                                                       │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls                                                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls --format json --alsologtostderr                                                                                                   │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ image          │ functional-555265 image ls --format table --alsologtostderr                                                                                                  │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ update-context │ functional-555265 update-context --alsologtostderr -v=2                                                                                                      │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ update-context │ functional-555265 update-context --alsologtostderr -v=2                                                                                                      │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	│ update-context │ functional-555265 update-context --alsologtostderr -v=2                                                                                                      │ functional-555265 │ jenkins │ v1.37.0 │ 21 Dec 25 19:57 UTC │ 21 Dec 25 19:57 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:57:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:57:10.140460  132186 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:57:10.140739  132186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:10.140749  132186 out.go:374] Setting ErrFile to fd 2...
	I1221 19:57:10.140753  132186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:10.140947  132186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:57:10.141369  132186 out.go:368] Setting JSON to false
	I1221 19:57:10.142172  132186 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13180,"bootTime":1766333850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:57:10.142228  132186 start.go:143] virtualization: kvm guest
	I1221 19:57:10.143965  132186 out.go:179] * [functional-555265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:57:10.145619  132186 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:57:10.145614  132186 notify.go:221] Checking for updates...
	I1221 19:57:10.148234  132186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:57:10.149690  132186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:57:10.151170  132186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:57:10.152716  132186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:57:10.154012  132186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:57:10.155584  132186 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:57:10.156085  132186 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:57:10.185706  132186 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 19:57:10.186834  132186 start.go:309] selected driver: kvm2
	I1221 19:57:10.186850  132186 start.go:928] validating driver "kvm2" against &{Name:functional-555265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-555265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:57:10.186971  132186 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:57:10.188266  132186 cni.go:84] Creating CNI manager for ""
	I1221 19:57:10.188343  132186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 19:57:10.188388  132186 start.go:353] cluster config:
	{Name:functional-555265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-555265 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:57:10.189675  132186 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.946353666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347330946331095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:245802,},InodesUsed:&UInt64Value{Value:115,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7cfa52d-099d-414d-a0ce-71a5d8f86f96 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.947234760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a55e71f4-2446-4167-a5d8-ff171a2749b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.947285902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a55e71f4-2446-4167-a5d8-ff171a2749b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.947662843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:296ce91e04ca71f673a30f01607a92e742b91d4eeac78e8e467d6dc922083709,PodSandboxId:83b146cab8d02696d12b308a1d0852bcd7986dc4d3860682af6990af69aa48c0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347166889697497,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vtlfg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12bfaa54-0a64-409e-9dae-6f1c3a619396,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1813ed0745829100e4ec6b63a060d2f66b9b97d9251dfa3d231f6b93c1a54,PodSandboxId:6183593a879c6658e666b43ce8d5b7bf5cf07b25bc4be5389c49286b32776f7d,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347065579694985,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a95a66b-135b-4cbd-8fad-a3ff9e0dd625,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b28b936f62d149b541872ecafded46e7355dd1992d6eb716758ffe1ae92666,PodSandboxId:44567e963353c12d4e1d7f57c9a661c93b637c17c995db8e1f833d0c6b1aff4a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347054597548046,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc92143a-3635-4b7b-b54f-0bf476a137f8,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3052de92f8784044166583b1725742ba5e270e4d29aaedc884b532693efccf,PodSandboxId:db3c47c731f3e13f9efa1c1ebeac7198fd8de47b8e009772855e54b24892f92d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1766347022189697263,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-xqzsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ddd6a39-8c3b-401f-bf65-e01981d7058f,},Annotations:map[string]string{io.kubernete
s.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa48c80264df2b97af2e1413c8d5987e42bb5355a979c1ff41dbee180ece4dc,PodSandboxId:8c54cc43983f53c755cea84360f6956e9d6af816e260d2e0621b328009bd0d13,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346969209628659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c42a0ed94a481088e2cdca6dcca665b78adfc77f7292aac70a766d0ce90d4fa,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346969181548388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3618fd4b023093af252199fa1ee9bc75e4580afdcaa465101865015895a7bfca,PodSandboxId:9250a265bdc924c711e45d2da820344a7b605c97dddca5ada86b6af54b258cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Stat
e:CONTAINER_RUNNING,CreatedAt:1766346965743889111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744a51cd1d2c8f1a92dde4d0e43c1759,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6927835ec040fa0b73ee9b44006f2471e5cb0494c6bdb7dfa623cec1fbdca20,PodSandboxId:5b986aecfc5f8e866f637a72d0b804963bf3b1ca99344b04fbe59ae1caee48b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346962113937826,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b4ad98d7aea656a9426955636e2e91e4532d026421763639823360d60e635a,PodSandboxId:a15a733b1d4339704411019c010e4b3664c8133829236083c9bd62c9c3eee324,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b9457
1f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346962053288004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0700344cab1d708d861280d6ff95b5fcb8eccac1b11fc6fb9126c3cca634339,PodSandboxId:d728f016a1af56d003ba0ae75710ca7835bec21bfb4cdb80573fb81e6b8b8920,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346961986055343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627abc545b49b8208289fa65da1227e32b745e2e57d6b6dfa4c062d48fe015a0,PodSandboxId:1f2c925fcd7d0fe221d257b307abb0bde56d00713e795ab8e62d675a68890bcc,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346961983591666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952aa0f304e02b8cf78944e2ab9460db0849e8c
0684b73279828f0e3972012ac,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766346961889740331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b0d0902e72f47336f33c933dae8ed41b926e930e26b6a76260
5786cf77799d,PodSandboxId:d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766346924871438854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ad93c3ade023f9897a34c894de2761e50b1d46dd782786a3fa4e26fe0d5c26,PodSandboxId:de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766346924433696992,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kuberne
tes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c9b86442454516dd6f80149e1e0346b5dab155158f067dee1c2aca6a381df2,PodSandboxId:57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766346920686841731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad1019c8cd0d09386ffad668538c9b8d6a910afe19274b9ad5775f3d7a4654e,PodSandboxId:3da12b148c37cde81b1bc021647daebfdb4d43286fa213b579bb843d89adfce9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766346920638945319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79ca6a93849
14d0257966b88a1889fa,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f00d7219b6f59cd93bc3f041f22f383c452541d5aa45f345398df455e5d5ae,PodSandboxId:8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766346920642426600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849fc6d111bf2907551b17ddfe5c82d104a82d9c3633e2c380cd6170812f0d2a,PodSandboxId:4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedA
t:1766346920616503727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a55e71f4-2446-4167-a5d8-ff171a2749b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.987384992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d0ee803-83fb-4e2f-8aa9-6c69a9039a04 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.987634741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d0ee803-83fb-4e2f-8aa9-6c69a9039a04 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.989217186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=374b5b53-bf97-49a2-a1ac-9b61a51b7a9a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.990343959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347330990317233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:245802,},InodesUsed:&UInt64Value{Value:115,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=374b5b53-bf97-49a2-a1ac-9b61a51b7a9a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.991500492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46ef0070-b2e4-4578-973b-49a937157aa5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.991573852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46ef0070-b2e4-4578-973b-49a937157aa5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:10 functional-555265 crio[5262]: time="2025-12-21 20:02:10.992224448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:296ce91e04ca71f673a30f01607a92e742b91d4eeac78e8e467d6dc922083709,PodSandboxId:83b146cab8d02696d12b308a1d0852bcd7986dc4d3860682af6990af69aa48c0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347166889697497,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vtlfg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12bfaa54-0a64-409e-9dae-6f1c3a619396,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1813ed0745829100e4ec6b63a060d2f66b9b97d9251dfa3d231f6b93c1a54,PodSandboxId:6183593a879c6658e666b43ce8d5b7bf5cf07b25bc4be5389c49286b32776f7d,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347065579694985,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a95a66b-135b-4cbd-8fad-a3ff9e0dd625,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b28b936f62d149b541872ecafded46e7355dd1992d6eb716758ffe1ae92666,PodSandboxId:44567e963353c12d4e1d7f57c9a661c93b637c17c995db8e1f833d0c6b1aff4a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347054597548046,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc92143a-3635-4b7b-b54f-0bf476a137f8,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3052de92f8784044166583b1725742ba5e270e4d29aaedc884b532693efccf,PodSandboxId:db3c47c731f3e13f9efa1c1ebeac7198fd8de47b8e009772855e54b24892f92d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1766347022189697263,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-xqzsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ddd6a39-8c3b-401f-bf65-e01981d7058f,},Annotations:map[string]string{io.kubernete
s.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa48c80264df2b97af2e1413c8d5987e42bb5355a979c1ff41dbee180ece4dc,PodSandboxId:8c54cc43983f53c755cea84360f6956e9d6af816e260d2e0621b328009bd0d13,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346969209628659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c42a0ed94a481088e2cdca6dcca665b78adfc77f7292aac70a766d0ce90d4fa,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346969181548388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3618fd4b023093af252199fa1ee9bc75e4580afdcaa465101865015895a7bfca,PodSandboxId:9250a265bdc924c711e45d2da820344a7b605c97dddca5ada86b6af54b258cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Stat
e:CONTAINER_RUNNING,CreatedAt:1766346965743889111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744a51cd1d2c8f1a92dde4d0e43c1759,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6927835ec040fa0b73ee9b44006f2471e5cb0494c6bdb7dfa623cec1fbdca20,PodSandboxId:5b986aecfc5f8e866f637a72d0b804963bf3b1ca99344b04fbe59ae1caee48b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346962113937826,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b4ad98d7aea656a9426955636e2e91e4532d026421763639823360d60e635a,PodSandboxId:a15a733b1d4339704411019c010e4b3664c8133829236083c9bd62c9c3eee324,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b9457
1f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346962053288004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0700344cab1d708d861280d6ff95b5fcb8eccac1b11fc6fb9126c3cca634339,PodSandboxId:d728f016a1af56d003ba0ae75710ca7835bec21bfb4cdb80573fb81e6b8b8920,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346961986055343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627abc545b49b8208289fa65da1227e32b745e2e57d6b6dfa4c062d48fe015a0,PodSandboxId:1f2c925fcd7d0fe221d257b307abb0bde56d00713e795ab8e62d675a68890bcc,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346961983591666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952aa0f304e02b8cf78944e2ab9460db0849e8c
0684b73279828f0e3972012ac,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766346961889740331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b0d0902e72f47336f33c933dae8ed41b926e930e26b6a76260
5786cf77799d,PodSandboxId:d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766346924871438854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ad93c3ade023f9897a34c894de2761e50b1d46dd782786a3fa4e26fe0d5c26,PodSandboxId:de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766346924433696992,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kuberne
tes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c9b86442454516dd6f80149e1e0346b5dab155158f067dee1c2aca6a381df2,PodSandboxId:57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766346920686841731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad1019c8cd0d09386ffad668538c9b8d6a910afe19274b9ad5775f3d7a4654e,PodSandboxId:3da12b148c37cde81b1bc021647daebfdb4d43286fa213b579bb843d89adfce9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766346920638945319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79ca6a93849
14d0257966b88a1889fa,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f00d7219b6f59cd93bc3f041f22f383c452541d5aa45f345398df455e5d5ae,PodSandboxId:8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766346920642426600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849fc6d111bf2907551b17ddfe5c82d104a82d9c3633e2c380cd6170812f0d2a,PodSandboxId:4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedA
t:1766346920616503727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46ef0070-b2e4-4578-973b-49a937157aa5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.028346725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dba5c3d-00d4-4c0a-8a50-d833c9e8e467 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.028501888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dba5c3d-00d4-4c0a-8a50-d833c9e8e467 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.030187225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=151d5880-10bc-4ac1-a8d5-d6a9a92ad06f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.031402995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347331031375574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:245802,},InodesUsed:&UInt64Value{Value:115,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=151d5880-10bc-4ac1-a8d5-d6a9a92ad06f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.032868047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98540456-67ce-4131-a06a-175cc902fe9d name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.033128615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98540456-67ce-4131-a06a-175cc902fe9d name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.033698463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:296ce91e04ca71f673a30f01607a92e742b91d4eeac78e8e467d6dc922083709,PodSandboxId:83b146cab8d02696d12b308a1d0852bcd7986dc4d3860682af6990af69aa48c0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347166889697497,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vtlfg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12bfaa54-0a64-409e-9dae-6f1c3a619396,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1813ed0745829100e4ec6b63a060d2f66b9b97d9251dfa3d231f6b93c1a54,PodSandboxId:6183593a879c6658e666b43ce8d5b7bf5cf07b25bc4be5389c49286b32776f7d,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347065579694985,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a95a66b-135b-4cbd-8fad-a3ff9e0dd625,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b28b936f62d149b541872ecafded46e7355dd1992d6eb716758ffe1ae92666,PodSandboxId:44567e963353c12d4e1d7f57c9a661c93b637c17c995db8e1f833d0c6b1aff4a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347054597548046,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc92143a-3635-4b7b-b54f-0bf476a137f8,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3052de92f8784044166583b1725742ba5e270e4d29aaedc884b532693efccf,PodSandboxId:db3c47c731f3e13f9efa1c1ebeac7198fd8de47b8e009772855e54b24892f92d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1766347022189697263,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-xqzsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ddd6a39-8c3b-401f-bf65-e01981d7058f,},Annotations:map[string]string{io.kubernete
s.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa48c80264df2b97af2e1413c8d5987e42bb5355a979c1ff41dbee180ece4dc,PodSandboxId:8c54cc43983f53c755cea84360f6956e9d6af816e260d2e0621b328009bd0d13,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346969209628659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c42a0ed94a481088e2cdca6dcca665b78adfc77f7292aac70a766d0ce90d4fa,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346969181548388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3618fd4b023093af252199fa1ee9bc75e4580afdcaa465101865015895a7bfca,PodSandboxId:9250a265bdc924c711e45d2da820344a7b605c97dddca5ada86b6af54b258cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Stat
e:CONTAINER_RUNNING,CreatedAt:1766346965743889111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744a51cd1d2c8f1a92dde4d0e43c1759,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6927835ec040fa0b73ee9b44006f2471e5cb0494c6bdb7dfa623cec1fbdca20,PodSandboxId:5b986aecfc5f8e866f637a72d0b804963bf3b1ca99344b04fbe59ae1caee48b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346962113937826,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b4ad98d7aea656a9426955636e2e91e4532d026421763639823360d60e635a,PodSandboxId:a15a733b1d4339704411019c010e4b3664c8133829236083c9bd62c9c3eee324,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b9457
1f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346962053288004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0700344cab1d708d861280d6ff95b5fcb8eccac1b11fc6fb9126c3cca634339,PodSandboxId:d728f016a1af56d003ba0ae75710ca7835bec21bfb4cdb80573fb81e6b8b8920,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346961986055343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627abc545b49b8208289fa65da1227e32b745e2e57d6b6dfa4c062d48fe015a0,PodSandboxId:1f2c925fcd7d0fe221d257b307abb0bde56d00713e795ab8e62d675a68890bcc,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346961983591666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952aa0f304e02b8cf78944e2ab9460db0849e8c
0684b73279828f0e3972012ac,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766346961889740331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b0d0902e72f47336f33c933dae8ed41b926e930e26b6a76260
5786cf77799d,PodSandboxId:d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766346924871438854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ad93c3ade023f9897a34c894de2761e50b1d46dd782786a3fa4e26fe0d5c26,PodSandboxId:de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766346924433696992,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kuberne
tes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c9b86442454516dd6f80149e1e0346b5dab155158f067dee1c2aca6a381df2,PodSandboxId:57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766346920686841731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad1019c8cd0d09386ffad668538c9b8d6a910afe19274b9ad5775f3d7a4654e,PodSandboxId:3da12b148c37cde81b1bc021647daebfdb4d43286fa213b579bb843d89adfce9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766346920638945319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79ca6a93849
14d0257966b88a1889fa,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f00d7219b6f59cd93bc3f041f22f383c452541d5aa45f345398df455e5d5ae,PodSandboxId:8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766346920642426600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849fc6d111bf2907551b17ddfe5c82d104a82d9c3633e2c380cd6170812f0d2a,PodSandboxId:4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedA
t:1766346920616503727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98540456-67ce-4131-a06a-175cc902fe9d name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.063703829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=addcb4a6-08c4-488c-b43a-bd0453c33377 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.063824379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=addcb4a6-08c4-488c-b43a-bd0453c33377 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.065452171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79e43352-e655-4cfb-a26d-5ee52c5b8239 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.066199709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347331066158061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:245802,},InodesUsed:&UInt64Value{Value:115,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e43352-e655-4cfb-a26d-5ee52c5b8239 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.067250296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0df60643-19f3-49c0-ade9-df02d3309f4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.067322681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0df60643-19f3-49c0-ade9-df02d3309f4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:02:11 functional-555265 crio[5262]: time="2025-12-21 20:02:11.067687426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:296ce91e04ca71f673a30f01607a92e742b91d4eeac78e8e467d6dc922083709,PodSandboxId:83b146cab8d02696d12b308a1d0852bcd7986dc4d3860682af6990af69aa48c0,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347166889697497,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vtlfg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12bfaa54-0a64-409e-9dae-6f1c3a619396,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1813ed0745829100e4ec6b63a060d2f66b9b97d9251dfa3d231f6b93c1a54,PodSandboxId:6183593a879c6658e666b43ce8d5b7bf5cf07b25bc4be5389c49286b32776f7d,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347065579694985,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a95a66b-135b-4cbd-8fad-a3ff9e0dd625,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b28b936f62d149b541872ecafded46e7355dd1992d6eb716758ffe1ae92666,PodSandboxId:44567e963353c12d4e1d7f57c9a661c93b637c17c995db8e1f833d0c6b1aff4a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347054597548046,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc92143a-3635-4b7b-b54f-0bf476a137f8,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f3052de92f8784044166583b1725742ba5e270e4d29aaedc884b532693efccf,PodSandboxId:db3c47c731f3e13f9efa1c1ebeac7198fd8de47b8e009772855e54b24892f92d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1766347022189697263,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-xqzsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ddd6a39-8c3b-401f-bf65-e01981d7058f,},Annotations:map[string]string{io.kubernete
s.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa48c80264df2b97af2e1413c8d5987e42bb5355a979c1ff41dbee180ece4dc,PodSandboxId:8c54cc43983f53c755cea84360f6956e9d6af816e260d2e0621b328009bd0d13,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1766346969209628659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c42a0ed94a481088e2cdca6dcca665b78adfc77f7292aac70a766d0ce90d4fa,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766346969181548388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3618fd4b023093af252199fa1ee9bc75e4580afdcaa465101865015895a7bfca,PodSandboxId:9250a265bdc924c711e45d2da820344a7b605c97dddca5ada86b6af54b258cd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Stat
e:CONTAINER_RUNNING,CreatedAt:1766346965743889111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744a51cd1d2c8f1a92dde4d0e43c1759,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6927835ec040fa0b73ee9b44006f2471e5cb0494c6bdb7dfa623cec1fbdca20,PodSandboxId:5b986aecfc5f8e866f637a72d0b804963bf3b1ca99344b04fbe59ae1caee48b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766346962113937826,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b4ad98d7aea656a9426955636e2e91e4532d026421763639823360d60e635a,PodSandboxId:a15a733b1d4339704411019c010e4b3664c8133829236083c9bd62c9c3eee324,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b9457
1f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766346962053288004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0700344cab1d708d861280d6ff95b5fcb8eccac1b11fc6fb9126c3cca634339,PodSandboxId:d728f016a1af56d003ba0ae75710ca7835bec21bfb4cdb80573fb81e6b8b8920,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1766346961986055343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:627abc545b49b8208289fa65da1227e32b745e2e57d6b6dfa4c062d48fe015a0,PodSandboxId:1f2c925fcd7d0fe221d257b307abb0bde56d00713e795ab8e62d675a68890bcc,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766346961983591666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:952aa0f304e02b8cf78944e2ab9460db0849e8c
0684b73279828f0e3972012ac,PodSandboxId:adc133ae5cc844884a3cfc280d4f4cb64fd7ca9ef5143661cd3818827f573451,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766346961889740331,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf89d3de-8549-43e7-b379-c34189591f83,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b0d0902e72f47336f33c933dae8ed41b926e930e26b6a76260
5786cf77799d,PodSandboxId:d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766346924871438854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-865k9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 259daaed-1bac-4381-bb6d-f6b9d71b2fa2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ad93c3ade023f9897a34c894de2761e50b1d46dd782786a3fa4e26fe0d5c26,PodSandboxId:de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766346924433696992,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6msqw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b24cef-0b60-4cdf-945b-873716695f79,},Annotations:map[string]string{io.kuberne
tes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c9b86442454516dd6f80149e1e0346b5dab155158f067dee1c2aca6a381df2,PodSandboxId:57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766346920686841731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4360ac378721e5de4af5616b101eef03,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.c
ontainer.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad1019c8cd0d09386ffad668538c9b8d6a910afe19274b9ad5775f3d7a4654e,PodSandboxId:3da12b148c37cde81b1bc021647daebfdb4d43286fa213b579bb843d89adfce9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766346920638945319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79ca6a93849
14d0257966b88a1889fa,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f00d7219b6f59cd93bc3f041f22f383c452541d5aa45f345398df455e5d5ae,PodSandboxId:8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766346920642426600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 708cee050b8a70424c8c67e5a60add8d,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849fc6d111bf2907551b17ddfe5c82d104a82d9c3633e2c380cd6170812f0d2a,PodSandboxId:4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedA
t:1766346920616503727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-555265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb32730d461be579507ea361d141f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0df60643-19f3-49c0-ade9-df02d3309f4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	296ce91e04ca7       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   2 minutes ago       Running             mysql                     0                   83b146cab8d02       mysql-6bcdcbc558-vtlfg                      default
	6ca1813ed0745       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                              4 minutes ago       Running             myfrontend                0                   6183593a879c6       sp-pod                                      default
	77b28b936f62d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           4 minutes ago       Exited              mount-munger              0                   44567e963353c       busybox-mount                               default
	9f3052de92f87       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6         5 minutes ago       Running             echo-server               0                   db3c47c731f3e       hello-node-connect-7d85dfc575-xqzsp         default
	cfa48c80264df       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              6 minutes ago       Running             coredns                   2                   8c54cc43983f5       coredns-66bc5c9577-865k9                    kube-system
	2c42a0ed94a48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              6 minutes ago       Running             storage-provisioner       3                   adc133ae5cc84       storage-provisioner                         kube-system
	3618fd4b02309       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              6 minutes ago       Running             kube-apiserver            0                   9250a265bdc92       kube-apiserver-functional-555265            kube-system
	d6927835ec040       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Running             etcd                      2                   5b986aecfc5f8       etcd-functional-555265                      kube-system
	e3b4ad98d7aea       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              6 minutes ago       Running             kube-scheduler            2                   a15a733b1d433       kube-scheduler-functional-555265            kube-system
	d0700344cab1d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              6 minutes ago       Running             kube-proxy                2                   d728f016a1af5       kube-proxy-6msqw                            kube-system
	627abc545b49b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              6 minutes ago       Running             kube-controller-manager   2                   1f2c925fcd7d0       kube-controller-manager-functional-555265   kube-system
	952aa0f304e02       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              6 minutes ago       Exited              storage-provisioner       2                   adc133ae5cc84       storage-provisioner                         kube-system
	43b0d0902e72f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              6 minutes ago       Exited              coredns                   1                   d2bdec683fec5       coredns-66bc5c9577-865k9                    kube-system
	04ad93c3ade02       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              6 minutes ago       Exited              kube-proxy                1                   de383b766fd62       kube-proxy-6msqw                            kube-system
	60c9b86442454       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Exited              etcd                      1                   57ae321d6c4eb       etcd-functional-555265                      kube-system
	b2f00d7219b6f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              6 minutes ago       Exited              kube-scheduler            1                   8807bc8b7445f       kube-scheduler-functional-555265            kube-system
	6ad1019c8cd0d       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              6 minutes ago       Exited              kube-apiserver            1                   3da12b148c37c       kube-apiserver-functional-555265            kube-system
	849fc6d111bf2       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              6 minutes ago       Exited              kube-controller-manager   1                   4cd4d68d20971       kube-controller-manager-functional-555265   kube-system
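
	The table above is the CRI-O view of containers on the node (the running attempt-2 control-plane containers plus the exited attempt-1 containers from the earlier restart). A sketch of how to regenerate it on the node with crictl, using the profile name and container IDs shown in these logs (assumes crictl is on the node's PATH, as it normally is in the minikube guest):
	  $ minikube -p functional-555265 ssh
	  $ sudo crictl ps -a                 # all containers, including exited attempts
	  $ sudo crictl logs cfa48c80264df    # e.g. the running coredns container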
	
	
	==> coredns [43b0d0902e72f47336f33c933dae8ed41b926e930e26b6a762605786cf77799d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55475 - 56867 "HINFO IN 4242947312195469909.6593753392398065159. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018527293s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cfa48c80264df2b97af2e1413c8d5987e42bb5355a979c1ff41dbee180ece4dc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51269 - 47457 "HINFO IN 594227927584252907.4755291191803679976. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021447769s
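
	Both coredns attempts logged above can also be pulled through the API server; the --previous flag returns the last terminated attempt of the same pod (pod name taken from the logs):
	  $ kubectl -n kube-system logs coredns-66bc5c9577-865k9
	  $ kubectl -n kube-system logs coredns-66bc5c9577-865k9 --previous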
	
	
	==> describe nodes <==
	Name:               functional-555265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-555265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=functional-555265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T19_54_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 19:54:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-555265
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:02:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 19:59:42 +0000   Sun, 21 Dec 2025 19:54:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 19:59:42 +0000   Sun, 21 Dec 2025 19:54:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 19:59:42 +0000   Sun, 21 Dec 2025 19:54:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 19:59:42 +0000   Sun, 21 Dec 2025 19:54:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    functional-555265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 60902d0788034adbb96223763b1bd603
	  System UUID:                60902d07-8803-4adb-b962-23763b1bd603
	  Boot ID:                    9981e399-6818-4c5b-9c69-aad85e4667bb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7kvw8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  default                     hello-node-connect-7d85dfc575-xqzsp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     mysql-6bcdcbc558-vtlfg                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m24s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-865k9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m49s
	  kube-system                 etcd-functional-555265                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m55s
	  kube-system                 kube-apiserver-functional-555265              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-functional-555265     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-6msqw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-scheduler-functional-555265              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-rkjqf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6bshh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m47s                  kube-proxy       
	  Normal  Starting                 6m1s                   kube-proxy       
	  Normal  Starting                 6m46s                  kube-proxy       
	  Normal  Starting                 8m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m2s (x8 over 8m2s)    kubelet          Node functional-555265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m2s (x8 over 8m2s)    kubelet          Node functional-555265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m2s (x7 over 8m2s)    kubelet          Node functional-555265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m55s                  kubelet          Node functional-555265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m55s                  kubelet          Node functional-555265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s                  kubelet          Node functional-555265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m55s                  kubelet          Starting kubelet.
	  Normal  NodeReady                7m54s                  kubelet          Node functional-555265 status is now: NodeReady
	  Normal  RegisteredNode           7m50s                  node-controller  Node functional-555265 event: Registered Node functional-555265 in Controller
	  Normal  Starting                 6m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node functional-555265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node functional-555265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m51s (x7 over 6m51s)  kubelet          Node functional-555265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m45s                  node-controller  Node functional-555265 event: Registered Node functional-555265 in Controller
	  Normal  Starting                 6m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet          Node functional-555265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet          Node functional-555265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet          Node functional-555265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m                     node-controller  Node functional-555265 event: Registered Node functional-555265 in Controller
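
	The node report above is the kubectl describe view; the Ready condition and the allocated-resources figures can be re-checked against the live cluster with something like the following (node name taken from the logs):
	  $ kubectl describe node functional-555265
	  $ kubectl get node functional-555265 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'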
	
	
	==> dmesg <==
	[  +1.180929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec21 19:54] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093257] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.134391] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.177188] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.028063] kauditd_printk_skb: 251 callbacks suppressed
	[Dec21 19:55] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.048679] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.580769] kauditd_printk_skb: 176 callbacks suppressed
	[ +14.343810] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.107703] kauditd_printk_skb: 12 callbacks suppressed
	[Dec21 19:56] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.710641] kauditd_printk_skb: 313 callbacks suppressed
	[  +1.963614] kauditd_printk_skb: 31 callbacks suppressed
	[ +10.094241] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000150] kauditd_printk_skb: 104 callbacks suppressed
	[Dec21 19:57] kauditd_printk_skb: 26 callbacks suppressed
	[  +9.121861] kauditd_printk_skb: 11 callbacks suppressed
	[ +21.033024] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.998983] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.343571] crun[9635]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.004269] kauditd_printk_skb: 109 callbacks suppressed
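
	The kauditd lines above are the guest kernel's audit-backlog notices, not errors in themselves. If needed, the ring buffer can be re-read on the node over minikube ssh, e.g.:
	  $ minikube -p functional-555265 ssh "sudo dmesg | tail -n 40"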
	
	
	==> etcd [60c9b86442454516dd6f80149e1e0346b5dab155158f067dee1c2aca6a381df2] <==
	{"level":"warn","ts":"2025-12-21T19:55:22.684366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.692810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.702842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.720370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.732581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.743692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:55:22.821600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-21T19:55:46.556663Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-21T19:55:46.570150Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-555265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.15:2380"],"advertise-client-urls":["https://192.168.39.15:2379"]}
	{"level":"error","ts":"2025-12-21T19:55:46.570306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T19:55:46.606414Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T19:55:46.606460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T19:55:46.606482Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aadd773bb1fe5a6f","current-leader-member-id":"aadd773bb1fe5a6f"}
	{"level":"info","ts":"2025-12-21T19:55:46.606534Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-21T19:55:46.606522Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-21T19:55:46.606581Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T19:55:46.606648Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T19:55:46.606654Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-21T19:55:46.606687Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.15:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T19:55:46.606698Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.15:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T19:55:46.606703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.15:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T19:55:46.611174Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.15:2380"}
	{"level":"error","ts":"2025-12-21T19:55:46.611270Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.15:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T19:55:46.611295Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.15:2380"}
	{"level":"info","ts":"2025-12-21T19:55:46.611300Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-555265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.15:2380"],"advertise-client-urls":["https://192.168.39.15:2379"]}
	
	
	==> etcd [d6927835ec040fa0b73ee9b44006f2471e5cb0494c6bdb7dfa623cec1fbdca20] <==
	{"level":"warn","ts":"2025-12-21T19:59:23.811964Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:59:23.483163Z","time spent":"327.903904ms","remote":"127.0.0.1:50698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ml5pwud7pxnmy2shhssb6eyqta\" mod_revision:1033 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ml5pwud7pxnmy2shhssb6eyqta\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ml5pwud7pxnmy2shhssb6eyqta\" > >"}
	{"level":"info","ts":"2025-12-21T19:59:24.979086Z","caller":"traceutil/trace.go:172","msg":"trace[1239855945] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"182.08538ms","start":"2025-12-21T19:59:24.796982Z","end":"2025-12-21T19:59:24.979067Z","steps":["trace[1239855945] 'read index received'  (duration: 182.080274ms)","trace[1239855945] 'applied index is now lower than readState.Index'  (duration: 4.243µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:59:24.979182Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.183888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:24.979263Z","caller":"traceutil/trace.go:172","msg":"trace[2023882985] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1042; }","duration":"182.278444ms","start":"2025-12-21T19:59:24.796977Z","end":"2025-12-21T19:59:24.979256Z","steps":["trace[2023882985] 'agreement among raft nodes before linearized reading'  (duration: 182.159046ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:24.981413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.356977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:24.982172Z","caller":"traceutil/trace.go:172","msg":"trace[745743913] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:1042; }","duration":"126.128824ms","start":"2025-12-21T19:59:24.856035Z","end":"2025-12-21T19:59:24.982164Z","steps":["trace[745743913] 'agreement among raft nodes before linearized reading'  (duration: 125.269966ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:24.982140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.525125ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:24.982311Z","caller":"traceutil/trace.go:172","msg":"trace[1830318475] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1042; }","duration":"114.703477ms","start":"2025-12-21T19:59:24.867601Z","end":"2025-12-21T19:59:24.982305Z","steps":["trace[1830318475] 'agreement among raft nodes before linearized reading'  (duration: 114.507586ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:59:26.361421Z","caller":"traceutil/trace.go:172","msg":"trace[1741060984] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1157; }","duration":"259.572303ms","start":"2025-12-21T19:59:26.101831Z","end":"2025-12-21T19:59:26.361403Z","steps":["trace[1741060984] 'read index received'  (duration: 259.567048ms)","trace[1741060984] 'applied index is now lower than readState.Index'  (duration: 4.382µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:59:26.361531Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.685448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:26.361549Z","caller":"traceutil/trace.go:172","msg":"trace[1521009287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1042; }","duration":"259.716282ms","start":"2025-12-21T19:59:26.101827Z","end":"2025-12-21T19:59:26.361543Z","steps":["trace[1521009287] 'agreement among raft nodes before linearized reading'  (duration: 259.654225ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:59:26.361615Z","caller":"traceutil/trace.go:172","msg":"trace[12104379] transaction","detail":"{read_only:false; response_revision:1043; number_of_response:1; }","duration":"293.406115ms","start":"2025-12-21T19:59:26.068199Z","end":"2025-12-21T19:59:26.361605Z","steps":["trace[12104379] 'process raft request'  (duration: 293.296687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:26.361794Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.952615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:26.361814Z","caller":"traceutil/trace.go:172","msg":"trace[251955942] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1043; }","duration":"120.018682ms","start":"2025-12-21T19:59:26.241790Z","end":"2025-12-21T19:59:26.361809Z","steps":["trace[251955942] 'agreement among raft nodes before linearized reading'  (duration: 119.937296ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:26.562805Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.059034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:26.562849Z","caller":"traceutil/trace.go:172","msg":"trace[387661988] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1043; }","duration":"190.110577ms","start":"2025-12-21T19:59:26.372729Z","end":"2025-12-21T19:59:26.562839Z","steps":["trace[387661988] 'range keys from in-memory index tree'  (duration: 189.903679ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:59:28.551478Z","caller":"traceutil/trace.go:172","msg":"trace[1937164864] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"134.904656ms","start":"2025-12-21T19:59:28.416554Z","end":"2025-12-21T19:59:28.551459Z","steps":["trace[1937164864] 'read index received'  (duration: 134.90076ms)","trace[1937164864] 'applied index is now lower than readState.Index'  (duration: 3.283µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:59:28.551649Z","caller":"traceutil/trace.go:172","msg":"trace[447065970] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"174.143597ms","start":"2025-12-21T19:59:28.377493Z","end":"2025-12-21T19:59:28.551636Z","steps":["trace[447065970] 'process raft request'  (duration: 173.988091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:28.551892Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.323286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:59:28.553004Z","caller":"traceutil/trace.go:172","msg":"trace[64047494] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1055; }","duration":"136.428404ms","start":"2025-12-21T19:59:28.416550Z","end":"2025-12-21T19:59:28.552978Z","steps":["trace[64047494] 'agreement among raft nodes before linearized reading'  (duration: 135.285827ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:59:32.892061Z","caller":"traceutil/trace.go:172","msg":"trace[1141842431] linearizableReadLoop","detail":"{readStateIndex:1174; appliedIndex:1174; }","duration":"244.333456ms","start":"2025-12-21T19:59:32.647701Z","end":"2025-12-21T19:59:32.892034Z","steps":["trace[1141842431] 'read index received'  (duration: 244.328317ms)","trace[1141842431] 'applied index is now lower than readState.Index'  (duration: 4.507µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:59:32.892168Z","caller":"traceutil/trace.go:172","msg":"trace[441244684] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"370.687937ms","start":"2025-12-21T19:59:32.521469Z","end":"2025-12-21T19:59:32.892157Z","steps":["trace[441244684] 'process raft request'  (duration: 370.60248ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:32.892327Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"244.615817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-21T19:59:32.892366Z","caller":"traceutil/trace.go:172","msg":"trace[140286787] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1058; }","duration":"244.663212ms","start":"2025-12-21T19:59:32.647696Z","end":"2025-12-21T19:59:32.892360Z","steps":["trace[140286787] 'agreement among raft nodes before linearized reading'  (duration: 244.507383ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:59:32.892326Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:59:32.521397Z","time spent":"370.831421ms","remote":"127.0.0.1:50698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-555265\" mod_revision:1040 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-555265\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-555265\" > >"}
	
	
	==> kernel <==
	 20:02:11 up 8 min,  0 users,  load average: 0.36, 0.59, 0.35
	Linux functional-555265 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3618fd4b023093af252199fa1ee9bc75e4580afdcaa465101865015895a7bfca] <==
	I1221 19:56:08.197995       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1221 19:56:08.205470       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1221 19:56:08.253795       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 19:56:08.903944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 19:56:09.033999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 19:56:09.642988       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 19:56:09.684738       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 19:56:09.717435       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 19:56:09.726281       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 19:56:11.823539       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 19:56:11.869421       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 19:56:11.918334       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 19:56:26.182953       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.200.42"}
	I1221 19:56:30.941642       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.230.35"}
	I1221 19:56:31.095190       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.27.171"}
	I1221 19:57:11.048518       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 19:57:11.333186       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.17.227"}
	I1221 19:57:11.354120       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.101.108"}
	E1221 19:57:44.191120       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:58910: use of closed network connection
	I1221 19:57:47.755141       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.67.126"}
	E1221 19:57:52.116918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:58954: use of closed network connection
	E1221 19:59:33.090274       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:57072: use of closed network connection
	E1221 19:59:34.395895       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:55768: use of closed network connection
	E1221 19:59:35.886744       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:55774: use of closed network connection
	E1221 19:59:39.193302       1 conn.go:339] Error on socket receive: read tcp 192.168.39.15:8441->192.168.39.1:55782: use of closed network connection
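
	The running kube-apiserver log shows only admission-evaluator registrations and client-side connection resets on port 8441 (the port the functional tests dial); when triaging, its health endpoints give a quicker signal than the log, e.g.:
	  $ kubectl get --raw='/livez'
	  $ kubectl get --raw='/readyz?verbose' | tail -n 20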
	
	
	==> kube-apiserver [6ad1019c8cd0d09386ffad668538c9b8d6a910afe19274b9ad5775f3d7a4654e] <==
	I1221 19:55:25.769422       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 19:55:25.776128       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 19:55:26.980566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 19:55:27.265176       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 19:55:27.319047       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 19:55:46.547624       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1221 19:55:46.579881       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.579955       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580015       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580067       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580141       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580188       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580252       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580295       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580334       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580365       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580399       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580441       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580490       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580541       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580579       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580611       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580644       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.580680       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 19:55:46.586272       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [627abc545b49b8208289fa65da1227e32b745e2e57d6b6dfa4c062d48fe015a0] <==
	I1221 19:56:11.542721       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1221 19:56:11.545722       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1221 19:56:11.550327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 19:56:11.550374       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1221 19:56:11.552640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 19:56:11.555045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 19:56:11.555065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 19:56:11.556200       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1221 19:56:11.559523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 19:56:11.559590       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 19:56:11.561959       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1221 19:56:11.565969       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1221 19:56:11.565991       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 19:56:11.566246       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 19:56:11.567314       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 19:56:11.567392       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 19:56:11.572630       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E1221 19:57:11.140700       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.157857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.165990       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.171135       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.180331       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.186644       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.186842       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 19:57:11.196372       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [849fc6d111bf2907551b17ddfe5c82d104a82d9c3633e2c380cd6170812f0d2a] <==
	I1221 19:55:26.955786       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1221 19:55:26.959610       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 19:55:26.959703       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 19:55:26.959791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-555265"
	I1221 19:55:26.959879       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1221 19:55:26.960036       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 19:55:26.960669       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 19:55:26.964328       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1221 19:55:26.964365       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1221 19:55:26.964379       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 19:55:26.964367       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 19:55:26.970195       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 19:55:26.970252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:55:26.970321       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1221 19:55:26.972005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 19:55:26.973206       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 19:55:26.978876       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 19:55:26.986107       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 19:55:26.994374       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1221 19:55:27.003782       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1221 19:55:27.011155       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 19:55:27.011209       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 19:55:27.011220       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 19:55:27.011143       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1221 19:55:27.012707       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [04ad93c3ade023f9897a34c894de2761e50b1d46dd782786a3fa4e26fe0d5c26] <==
	I1221 19:55:24.821721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 19:55:24.922846       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 19:55:24.922897       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1221 19:55:24.922979       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 19:55:25.003791       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 19:55:25.003974       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 19:55:25.004001       1 server_linux.go:132] "Using iptables Proxier"
	I1221 19:55:25.020372       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 19:55:25.020730       1 server.go:527] "Version info" version="v1.34.3"
	I1221 19:55:25.020799       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:55:25.026690       1 config.go:200] "Starting service config controller"
	I1221 19:55:25.027568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 19:55:25.027805       1 config.go:106] "Starting endpoint slice config controller"
	I1221 19:55:25.027900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 19:55:25.027944       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 19:55:25.028066       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 19:55:25.029437       1 config.go:309] "Starting node config controller"
	I1221 19:55:25.029699       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 19:55:25.029706       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 19:55:25.128106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 19:55:25.128246       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 19:55:25.132022       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d0700344cab1d708d861280d6ff95b5fcb8eccac1b11fc6fb9126c3cca634339] <==
	I1221 19:56:09.574890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 19:56:09.675718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 19:56:09.675744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1221 19:56:09.675861       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 19:56:09.749805       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 19:56:09.749860       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 19:56:09.749890       1 server_linux.go:132] "Using iptables Proxier"
	I1221 19:56:09.759086       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 19:56:09.759318       1 server.go:527] "Version info" version="v1.34.3"
	I1221 19:56:09.759344       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:56:09.764063       1 config.go:200] "Starting service config controller"
	I1221 19:56:09.764097       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 19:56:09.764115       1 config.go:106] "Starting endpoint slice config controller"
	I1221 19:56:09.764118       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 19:56:09.764128       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 19:56:09.764132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 19:56:09.766493       1 config.go:309] "Starting node config controller"
	I1221 19:56:09.766587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 19:56:09.766610       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 19:56:09.864567       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 19:56:09.864570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 19:56:09.864634       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b2f00d7219b6f59cd93bc3f041f22f383c452541d5aa45f345398df455e5d5ae] <==
	I1221 19:55:22.030802       1 serving.go:386] Generated self-signed cert in-memory
	I1221 19:55:23.757854       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 19:55:23.757935       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:55:23.764674       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 19:55:23.764698       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1221 19:55:23.764727       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:55:23.764732       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:55:23.764787       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 19:55:23.764795       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 19:55:23.765195       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 19:55:23.765250       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 19:55:23.865301       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1221 19:55:23.865632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:55:23.865809       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 19:55:46.560491       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1221 19:55:46.560539       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1221 19:55:46.560563       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1221 19:55:46.560591       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 19:55:46.560629       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:55:46.560644       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1221 19:55:46.565515       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1221 19:55:46.565571       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e3b4ad98d7aea656a9426955636e2e91e4532d026421763639823360d60e635a] <==
	I1221 19:56:06.550559       1 serving.go:386] Generated self-signed cert in-memory
	W1221 19:56:08.078521       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 19:56:08.078552       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 19:56:08.078561       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 19:56:08.078566       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 19:56:08.170508       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 19:56:08.172825       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:56:08.181448       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:56:08.181509       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 19:56:08.183198       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 19:56:08.183251       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 19:56:08.281842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:01:30 functional-555265 kubelet[6157]: E1221 20:01:30.915916    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7kvw8" podUID="66a9e732-72ed-4545-b4ec-fa351c75346f"
	Dec 21 20:01:33 functional-555265 kubelet[6157]: E1221 20:01:33.333578    6157 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 21 20:01:33 functional-555265 kubelet[6157]: E1221 20:01:33.333642    6157 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 21 20:01:33 functional-555265 kubelet[6157]: E1221 20:01:33.334034    6157 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-rkjqf_kubernetes-dashboard(9036337c-8add-445f-8351-77739fd1f3b4): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 21 20:01:33 functional-555265 kubelet[6157]: E1221 20:01:33.334074    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rkjqf" podUID="9036337c-8add-445f-8351-77739fd1f3b4"
	Dec 21 20:01:35 functional-555265 kubelet[6157]: E1221 20:01:35.299962    6157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766347295299437316  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:35 functional-555265 kubelet[6157]: E1221 20:01:35.300006    6157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766347295299437316  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:45 functional-555265 kubelet[6157]: E1221 20:01:45.302953    6157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766347305302423461  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:45 functional-555265 kubelet[6157]: E1221 20:01:45.302999    6157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766347305302423461  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:47 functional-555265 kubelet[6157]: E1221 20:01:47.917183    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rkjqf" podUID="9036337c-8add-445f-8351-77739fd1f3b4"
	Dec 21 20:01:55 functional-555265 kubelet[6157]: E1221 20:01:55.305799    6157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766347315305239496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:55 functional-555265 kubelet[6157]: E1221 20:01:55.306166    6157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766347315305239496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:01:59 functional-555265 kubelet[6157]: E1221 20:01:59.917087    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rkjqf" podUID="9036337c-8add-445f-8351-77739fd1f3b4"
	Dec 21 20:02:03 functional-555265 kubelet[6157]: E1221 20:02:03.432068    6157 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 21 20:02:03 functional-555265 kubelet[6157]: E1221 20:02:03.432147    6157 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 21 20:02:03 functional-555265 kubelet[6157]: E1221 20:02:03.432365    6157 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-6bshh_kubernetes-dashboard(4f2ccf2f-93fe-4c20-a5ee-b29a88e5c880): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 21 20:02:03 functional-555265 kubelet[6157]: E1221 20:02:03.432402    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6bshh" podUID="4f2ccf2f-93fe-4c20-a5ee-b29a88e5c880"
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.014311    6157 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod4360ac378721e5de4af5616b101eef03/crio-57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929: Error finding container 57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929: Status 404 returned error can't find the container with id 57ae321d6c4eb7dd1b895f22557c0b0cd333b8f0cd0795879843663573a9f929
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.015086    6157 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod708cee050b8a70424c8c67e5a60add8d/crio-8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05: Error finding container 8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05: Status 404 returned error can't find the container with id 8807bc8b7445f5651e77220c575ad6dca3efbae4dd96b56275d5fdcfbe6abb05
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.015453    6157 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfb32730d461be579507ea361d141f6c6/crio-4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878: Error finding container 4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878: Status 404 returned error can't find the container with id 4cd4d68d2097141ef8d12ddac95a43ab9fdb1efe22a58f0be0f4b5681596c878
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.015834    6157 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod259daaed-1bac-4381-bb6d-f6b9d71b2fa2/crio-d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875: Error finding container d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875: Status 404 returned error can't find the container with id d2bdec683fec565edea26d07963974f82b75f50c3467ce03e8aad7f7c5da0875
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.016266    6157 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pode7b24cef-0b60-4cdf-945b-873716695f79/crio-de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1: Error finding container de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1: Status 404 returned error can't find the container with id de383b766fd62e53c249e3327edcc4a2f8ba9748bb597ad5f2ee3954c23a30f1
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.308658    6157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766347325308038953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:02:05 functional-555265 kubelet[6157]: E1221 20:02:05.308726    6157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766347325308038953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:245802}  inodes_used:{value:115}}"
	Dec 21 20:02:10 functional-555265 kubelet[6157]: E1221 20:02:10.917075    6157 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rkjqf" podUID="9036337c-8add-445f-8351-77739fd1f3b4"
	
	
	==> storage-provisioner [2c42a0ed94a481088e2cdca6dcca665b78adfc77f7292aac70a766d0ce90d4fa] <==
	W1221 20:01:47.720423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:49.724005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:49.728982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:51.732796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:51.738966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:53.742825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:53.751247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:55.754452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:55.760645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:57.764653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:57.773502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:59.777000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:01:59.782486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:01.786141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:01.791414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:03.795112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:03.800133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:05.803901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:05.813743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:07.816618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:07.825706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:09.829931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:09.835068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:11.838730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:02:11.844275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [952aa0f304e02b8cf78944e2ab9460db0849e8c0684b73279828f0e3972012ac] <==
	I1221 19:56:02.173320       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 19:56:02.202529       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-555265 -n functional-555265
helpers_test.go:270: (dbg) Run:  kubectl --context functional-555265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-7kvw8 dashboard-metrics-scraper-77bf4d6c4c-rkjqf kubernetes-dashboard-855c9754f9-6bshh
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-555265 describe pod busybox-mount hello-node-75c85bcc94-7kvw8 dashboard-metrics-scraper-77bf4d6c4c-rkjqf kubernetes-dashboard-855c9754f9-6bshh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-555265 describe pod busybox-mount hello-node-75c85bcc94-7kvw8 dashboard-metrics-scraper-77bf4d6c4c-rkjqf kubernetes-dashboard-855c9754f9-6bshh: exit status 1 (73.526889ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-555265/192.168.39.15
	Start Time:       Sun, 21 Dec 2025 19:56:33 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://77b28b936f62d149b541872ecafded46e7355dd1992d6eb716758ffe1ae92666
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 21 Dec 2025 19:57:34 +0000
	      Finished:     Sun, 21 Dec 2025 19:57:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6qtbw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6qtbw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m38s  default-scheduler  Successfully assigned default/busybox-mount to functional-555265
	  Normal  Pulling    5m38s  kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m38s  kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.295s (1m0.45s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m38s  kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    4m38s  kubelet            spec.containers{mount-munger}: Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7kvw8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-555265/192.168.39.15
	Start Time:       Sun, 21 Dec 2025 19:56:31 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wfsh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6wfsh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m41s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7kvw8 to functional-555265
	  Warning  Failed     4m40s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     69s (x3 over 4m40s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     69s (x2 over 2m59s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    42s (x4 over 4m40s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     42s (x4 over 4m40s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    27s (x4 over 5m41s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-rkjqf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6bshh" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-555265 describe pod busybox-mount hello-node-75c85bcc94-7kvw8 dashboard-metrics-scraper-77bf4d6c4c-rkjqf kubernetes-dashboard-855c9754f9-6bshh: exit status 1
E1221 20:03:33.673132  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-555265 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-555265 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-7kvw8" [66a9e732-72ed-4545-b4ec-fa351c75346f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-555265 -n functional-555265
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-21 20:06:31.342362229 +0000 UTC m=+1220.180154077
functional_test.go:1460: (dbg) Run:  kubectl --context functional-555265 describe po hello-node-75c85bcc94-7kvw8 -n default
functional_test.go:1460: (dbg) kubectl --context functional-555265 describe po hello-node-75c85bcc94-7kvw8 -n default:
Name:             hello-node-75c85bcc94-7kvw8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-555265/192.168.39.15
Start Time:       Sun, 21 Dec 2025 19:56:31 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wfsh (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6wfsh:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7kvw8 to functional-555265
Warning  Failed     8m59s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    118s (x5 over 10m)   kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
Warning  Failed     87s (x5 over 8m59s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     87s (x4 over 7m18s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    6s (x16 over 8m59s)  kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     6s (x16 over 8m59s)  kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-555265 logs hello-node-75c85bcc94-7kvw8 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-555265 logs hello-node-75c85bcc94-7kvw8 -n default: exit status 1 (73.316859ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7kvw8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-555265 logs hello-node-75c85bcc94-7kvw8 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 service --namespace=default --https --url hello-node: exit status 115 (244.32847ms)

                                                
                                                
-- stdout --
	https://192.168.39.15:32281
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-555265 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 service hello-node --url --format={{.IP}}: exit status 115 (249.900257ms)

                                                
                                                
-- stdout --
	192.168.39.15
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-555265 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 service hello-node --url: exit status 115 (239.056705ms)

                                                
                                                
-- stdout --
	http://192.168.39.15:32281
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-555265 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.15:32281
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.24s)
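
The URL variant fails the same way: stdout does carry a NodePort URL (http://192.168.39.15:32281) and the harness even logs "found endpoint for hello-node", yet the command exits 115 with the same SVC_UNREACHABLE reason. A quick probe of that URL from the host, assuming the node IP and port captured above are still current, would help separate "no ready pod" from a routing problem:

	curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 http://192.168.39.15:32281/

A timeout or connection refused here would be consistent with the service having no ready endpoints, matching the stderr above.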

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (3.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-089730 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-089730 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-089730 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-089730 --alsologtostderr -v=1] stderr:
I1221 20:10:48.994840  137641 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:48.995117  137641 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:48.995128  137641 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:48.995131  137641 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:48.995342  137641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:48.995606  137641 mustload.go:66] Loading cluster: functional-089730
I1221 20:10:48.996005  137641 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:48.998192  137641 host.go:66] Checking if "functional-089730" exists ...
I1221 20:10:48.998372  137641 api_server.go:166] Checking apiserver status ...
I1221 20:10:48.998426  137641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1221 20:10:49.000970  137641 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:49.001381  137641 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:49.001407  137641 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:49.001587  137641 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:49.109853  137641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6680/cgroup
W1221 20:10:49.122639  137641 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6680/cgroup: Process exited with status 1
stdout:

stderr:
I1221 20:10:49.122714  137641 ssh_runner.go:195] Run: ls
I1221 20:10:49.127804  137641 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8441/healthz ...
I1221 20:10:49.133288  137641 api_server.go:279] https://192.168.39.143:8441/healthz returned 200:
ok
W1221 20:10:49.133340  137641 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1221 20:10:49.133498  137641 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:49.133520  137641 addons.go:70] Setting dashboard=true in profile "functional-089730"
I1221 20:10:49.133533  137641 addons.go:239] Setting addon dashboard=true in "functional-089730"
I1221 20:10:49.133556  137641 host.go:66] Checking if "functional-089730" exists ...
I1221 20:10:49.137015  137641 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1221 20:10:49.138399  137641 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1221 20:10:49.139994  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1221 20:10:49.140012  137641 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1221 20:10:49.142455  137641 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:49.142859  137641 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:49.142881  137641 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:49.143052  137641 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:49.250273  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1221 20:10:49.250310  137641 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1221 20:10:49.278024  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1221 20:10:49.278051  137641 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1221 20:10:49.298438  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1221 20:10:49.298462  137641 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1221 20:10:49.325020  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1221 20:10:49.325044  137641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1221 20:10:49.346667  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1221 20:10:49.346700  137641 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1221 20:10:49.369845  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1221 20:10:49.369871  137641 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1221 20:10:49.391250  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1221 20:10:49.391279  137641 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1221 20:10:49.414566  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1221 20:10:49.414597  137641 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1221 20:10:49.435862  137641 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1221 20:10:49.435890  137641 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1221 20:10:49.458070  137641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1221 20:10:50.148636  137641 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-089730 addons enable metrics-server

I1221 20:10:50.150040  137641 addons.go:202] Writing out "functional-089730" config to set dashboard=true...
W1221 20:10:50.150404  137641 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1221 20:10:50.151436  137641 kapi.go:59] client config for functional-089730: &rest.Config{Host:"https://192.168.39.143:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.key", CAFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1221 20:10:50.152057  137641 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1221 20:10:50.152081  137641 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1221 20:10:50.152088  137641 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1221 20:10:50.152097  137641 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1221 20:10:50.152104  137641 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1221 20:10:50.160655  137641 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  1a85ed2e-5eac-47a5-8da2-733064866a39 918 0 2025-12-21 20:10:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-21 20:10:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.104.47.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.104.47.52],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1221 20:10:50.160796  137641 out.go:285] * Launching proxy ...
* Launching proxy ...
I1221 20:10:50.160863  137641 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-089730 proxy --port 36195]
I1221 20:10:50.161225  137641 dashboard.go:159] Waiting for kubectl to output host:port ...
I1221 20:10:50.209492  137641 out.go:203] 
W1221 20:10:50.210642  137641 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1221 20:10:50.210663  137641 out.go:285] * 
* 
W1221 20:10:50.216067  137641 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1221 20:10:50.217547  137641 out.go:203] 
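
The dashboard failure is of a different kind: the addon manifests apply cleanly and the kubernetes-dashboard Service is found, but the `dashboard --url` flow then launches `kubectl proxy --port 36195` (dashboard.go:154 above) and aborts with HOST_KUBECTL_PROXY when it reads EOF before kubectl ever prints its host:port line. A minimal way to reproduce just that step outside the test harness, assuming the same context and that the port is still free:

	kubectl --context functional-089730 proxy --port 36195
	# on success kubectl prints: Starting to serve on 127.0.0.1:36195

If this exits immediately (port already bound, host kubectl misconfigured, or the apiserver at https://192.168.39.143:8441 unreachable from the host), the dashboard command would see exactly the "readByteWithTimeout: EOF" captured in the stderr above.
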
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-089730 -n functional-089730
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 logs -n 25: (1.374361312s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-089730 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:09 UTC │                     │
	│ mount     │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001:/mount-9p --alsologtostderr -v=1              │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:09 UTC │                     │
	│ ssh       │ functional-089730 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:09 UTC │ 21 Dec 25 20:09 UTC │
	│ ssh       │ functional-089730 ssh -- ls -la /mount-9p                                                                                                           │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:09 UTC │ 21 Dec 25 20:09 UTC │
	│ ssh       │ functional-089730 ssh cat /mount-9p/test-1766347782880052427                                                                                        │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:09 UTC │ 21 Dec 25 20:09 UTC │
	│ ssh       │ functional-089730 ssh stat /mount-9p/created-by-test                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh sudo umount -f /mount-9p                                                                                                      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1363950450/001:/mount-9p --alsologtostderr -v=1 --port 32807 │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-089730 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh -- ls -la /mount-9p                                                                                                           │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh sudo umount -f /mount-9p                                                                                                      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount1 --alsologtostderr -v=1                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount2 --alsologtostderr -v=1                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-089730 ssh findmnt -T /mount1                                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount3 --alsologtostderr -v=1                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-089730 ssh findmnt -T /mount1                                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh findmnt -T /mount2                                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh       │ functional-089730 ssh findmnt -T /mount3                                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ mount     │ -p functional-089730 --kill=true                                                                                                                    │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-089730 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                     │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-089730 --alsologtostderr -v=1                                                                                      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:10:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:10:48.874776  137625 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:10:48.874961  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.874974  137625 out.go:374] Setting ErrFile to fd 2...
	I1221 20:10:48.874980  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.875432  137625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:10:48.876089  137625 out.go:368] Setting JSON to false
	I1221 20:10:48.877361  137625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13999,"bootTime":1766333850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:10:48.877445  137625 start.go:143] virtualization: kvm guest
	I1221 20:10:48.879534  137625 out.go:179] * [functional-089730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:10:48.881061  137625 notify.go:221] Checking for updates...
	I1221 20:10:48.881083  137625 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:10:48.882524  137625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:10:48.884057  137625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 20:10:48.885365  137625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 20:10:48.886412  137625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:10:48.887645  137625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:10:48.889464  137625 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:10:48.890237  137625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:10:48.921546  137625 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 20:10:48.922904  137625 start.go:309] selected driver: kvm2
	I1221 20:10:48.922919  137625 start.go:928] validating driver "kvm2" against &{Name:functional-089730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-089730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:10:48.923086  137625 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:10:48.925076  137625 out.go:203] 
	W1221 20:10:48.926169  137625 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 20:10:48.927191  137625 out.go:203] 
	
	
	==> CRI-O <==
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.056174112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347851056146746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:216501,},InodesUsed:&UInt64Value{Value:97,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76161935-f0e1-4928-8b12-65bae4b4fe3f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.057075650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=097f6553-f6e9-4f0a-a123-91a88961f7f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.057142320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=097f6553-f6e9-4f0a-a123-91a88961f7f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.057553999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=097f6553-f6e9-4f0a-a123-91a88961f7f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.097563045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42226119-04c3-40e5-9adb-e8dafc4283e4 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.097652825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42226119-04c3-40e5-9adb-e8dafc4283e4 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.099011330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66c09946-40c5-4b99-8865-02ec73fe3083 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.099662007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347851099639910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:216501,},InodesUsed:&UInt64Value{Value:97,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66c09946-40c5-4b99-8865-02ec73fe3083 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.100633159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41fc987e-def5-434b-87f0-d16a73ef8ddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.100688677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41fc987e-def5-434b-87f0-d16a73ef8ddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.101668653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41fc987e-def5-434b-87f0-d16a73ef8ddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.132719831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f45162f-b170-4fd2-b373-472cdf475e24 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.132996804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f45162f-b170-4fd2-b373-472cdf475e24 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.134143503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=deced5b9-5551-42b8-bb07-8c5f7f046d0d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.135465663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347851135386385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:216501,},InodesUsed:&UInt64Value{Value:97,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=deced5b9-5551-42b8-bb07-8c5f7f046d0d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.136332138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7e08d5b-9337-44a8-8353-e5dfdc25aa08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.136385694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7e08d5b-9337-44a8-8353-e5dfdc25aa08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.136768834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7e08d5b-9337-44a8-8353-e5dfdc25aa08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.170525634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=109dff75-e619-43fb-9ccd-c92c5252cfb6 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.170615592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=109dff75-e619-43fb-9ccd-c92c5252cfb6 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.172172853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e43de170-3e43-4c57-8bea-fce800eb8027 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.173248218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766347851173223721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:216501,},InodesUsed:&UInt64Value{Value:97,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e43de170-3e43-4c57-8bea-fce800eb8027 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.174222596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d9eac84-bc5d-417e-9655-cbd70ca02d72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.174304017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d9eac84-bc5d-417e-9655-cbd70ca02d72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:10:51 functional-089730 crio[5685]: time="2025-12-21 20:10:51.174884803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d9eac84-bc5d-417e-9655-cbd70ca02d72 name=/runtime.v1.RuntimeService/ListContainers
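The repeated /runtime.v1.RuntimeService/ListContainers entries above are CRI-O answering routine CRI polling with its full container list. As a rough illustration of the same RPC, the sketch below issues an unfiltered ListContainers call against CRI-O using the Go CRI API bindings. It is not taken from the minikube test code; the socket path /var/run/crio/crio.sock, the insecure local credentials, and the 10-second timeout are assumptions.

// A minimal sketch, assuming CRI-O's default socket at /var/run/crio/crio.sock;
// not part of the minikube test suite.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to the local CRI-O socket (assumed path) over a unix-domain gRPC channel.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter mirrors the "No filters were applied" requests in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print a truncated ID, the container name, and its state, roughly
		// matching the columns of the container status table below.
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The unfiltered listing is what the container status table below summarizes; restricting it to running containers would only require setting Filter to a ContainerFilter whose State is CONTAINER_RUNNING.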
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e6b81a5792ff6       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                              2 seconds ago        Running             myfrontend                0                   e3ed1ff78915a       sp-pod                                      default
	7c98b5b625233       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 seconds ago        Exited              mount-munger              0                   fef0553634746       busybox-mount                               default
	f33f3b6125f3d       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   About a minute ago   Running             mysql                     0                   7d9e0b536cc6f       mysql-7d7b65bc95-9r6m2                      default
	30476b1b24d7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              2 minutes ago        Running             storage-provisioner       3                   8b268f461c72e       storage-provisioner                         kube-system
	d88ac7ce60607       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              2 minutes ago        Running             coredns                   2                   4de5a4cac2270       coredns-7d764666f9-ntzl5                    kube-system
	ecbe91e48af3a       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              2 minutes ago        Running             kube-proxy                2                   11c5b1a74e282       kube-proxy-6smpp                            kube-system
	f929c086126ff       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              2 minutes ago        Running             kube-apiserver            0                   04cd4698199fb       kube-apiserver-functional-089730            kube-system
	4198aac42042d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              2 minutes ago        Running             kube-controller-manager   2                   ef8159a89b543       kube-controller-manager-functional-089730   kube-system
	a65f0a39880a3       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              2 minutes ago        Running             kube-scheduler            2                   e6cf51b236045       kube-scheduler-functional-089730            kube-system
	89341fc9df43d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              2 minutes ago        Running             etcd                      2                   551acf1af3dc1       etcd-functional-089730                      kube-system
	da242b28631cc       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              2 minutes ago        Exited              coredns                   1                   b9c953e28fbfe       coredns-7d764666f9-ntzl5                    kube-system
	135b8186acfdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              2 minutes ago        Exited              storage-provisioner       2                   49f246cdc6803       storage-provisioner                         kube-system
	304c649f30afc       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              2 minutes ago        Exited              kube-proxy                1                   80613639237f2       kube-proxy-6smpp                            kube-system
	88b9a10432063       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              2 minutes ago        Exited              kube-controller-manager   1                   5a95cd1a128c8       kube-controller-manager-functional-089730   kube-system
	e5533b53ecc8f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              2 minutes ago        Exited              etcd                      1                   78470c1e28a0c       etcd-functional-089730                      kube-system
	c885fa9516caf       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              2 minutes ago        Exited              kube-scheduler            1                   e28604768148b       kube-scheduler-functional-089730            kube-system
	
	
	==> coredns [d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36357 - 59019 "HINFO IN 6686439616755241019.5601899628486953117. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017389489s
	
	
	==> coredns [da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55070 - 63953 "HINFO IN 8837462380039339189.797622655870743556. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027717276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-089730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-089730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=functional-089730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_07_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:07:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-089730
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:10:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:09:42 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:09:42 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:09:42 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:09:42 +0000   Sun, 21 Dec 2025 20:07:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    functional-089730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7924d9793944d52ae99ab871371d6d3
	  System UUID:                b7924d97-9394-4d52-ae99-ab871371d6d3
	  Boot ID:                    57f5eda2-a2ef-4802-8d47-5aa4113384a4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-cb7fv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  default                     hello-node-connect-9f67c86d4-twx5b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  default                     mysql-7d7b65bc95-9r6m2                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    104s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 coredns-7d764666f9-ntzl5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m35s
	  kube-system                 etcd-functional-089730                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m42s
	  kube-system                 kube-apiserver-functional-089730              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-functional-089730     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-6smpp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-functional-089730              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-ntrf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-txnj9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  3m36s  node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	  Normal  RegisteredNode  2m41s  node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	  Normal  RegisteredNode  2m7s   node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	
	
	==> dmesg <==
	[Dec21 20:06] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001271] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000307] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.192154] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088656] kauditd_printk_skb: 1 callbacks suppressed
	[Dec21 20:07] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.131768] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.033488] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.027779] kauditd_printk_skb: 251 callbacks suppressed
	[ +28.266386] kauditd_printk_skb: 39 callbacks suppressed
	[Dec21 20:08] kauditd_printk_skb: 358 callbacks suppressed
	[  +6.797987] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.103594] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.741724] kauditd_printk_skb: 418 callbacks suppressed
	[  +1.465278] kauditd_printk_skb: 131 callbacks suppressed
	[Dec21 20:09] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 47 callbacks suppressed
	[Dec21 20:10] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.299023] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.768541] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83] <==
	{"level":"info","ts":"2025-12-21T20:09:16.814767Z","caller":"traceutil/trace.go:172","msg":"trace[1106622924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:736; }","duration":"236.850211ms","start":"2025-12-21T20:09:16.577902Z","end":"2025-12-21T20:09:16.814752Z","steps":["trace[1106622924] 'range keys from in-memory index tree'  (duration: 236.114841ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:21.023501Z","caller":"traceutil/trace.go:172","msg":"trace[809362791] linearizableReadLoop","detail":"{readStateIndex:826; appliedIndex:826; }","duration":"445.874184ms","start":"2025-12-21T20:09:20.577608Z","end":"2025-12-21T20:09:21.023482Z","steps":["trace[809362791] 'read index received'  (duration: 445.869071ms)","trace[809362791] 'applied index is now lower than readState.Index'  (duration: 4.495µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:09:21.023625Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"446.111248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:21.023644Z","caller":"traceutil/trace.go:172","msg":"trace[1696225313] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:742; }","duration":"446.160193ms","start":"2025-12-21T20:09:20.577479Z","end":"2025-12-21T20:09:21.023639Z","steps":["trace[1696225313] 'agreement among raft nodes before linearized reading'  (duration: 446.085479ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.023663Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:20.577462Z","time spent":"446.196892ms","remote":"127.0.0.1:53250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-21T20:09:21.024483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.622009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:21.024605Z","caller":"traceutil/trace.go:172","msg":"trace[1575988518] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:743; }","duration":"205.747249ms","start":"2025-12-21T20:09:20.818850Z","end":"2025-12-21T20:09:21.024597Z","steps":["trace[1575988518] 'agreement among raft nodes before linearized reading'  (duration: 205.600977ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:21.024715Z","caller":"traceutil/trace.go:172","msg":"trace[1738677528] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"496.090127ms","start":"2025-12-21T20:09:20.528617Z","end":"2025-12-21T20:09:21.024707Z","steps":["trace[1738677528] 'process raft request'  (duration: 495.568338ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.025573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.626276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:21.025703Z","caller":"traceutil/trace.go:172","msg":"trace[430887472] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:743; }","duration":"195.755456ms","start":"2025-12-21T20:09:20.829940Z","end":"2025-12-21T20:09:21.025696Z","steps":["trace[430887472] 'agreement among raft nodes before linearized reading'  (duration: 195.610107ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.026563Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:20.528601Z","time spent":"496.207591ms","remote":"127.0.0.1:53208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:742 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-21T20:09:25.277670Z","caller":"traceutil/trace.go:172","msg":"trace[443839155] linearizableReadLoop","detail":"{readStateIndex:841; appliedIndex:841; }","duration":"224.483292ms","start":"2025-12-21T20:09:25.053162Z","end":"2025-12-21T20:09:25.277645Z","steps":["trace[443839155] 'read index received'  (duration: 224.477273ms)","trace[443839155] 'applied index is now lower than readState.Index'  (duration: 3.592µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:09:25.277823Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.655352ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:25.277868Z","caller":"traceutil/trace.go:172","msg":"trace[1959271698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:756; }","duration":"224.72507ms","start":"2025-12-21T20:09:25.053136Z","end":"2025-12-21T20:09:25.277861Z","steps":["trace[1959271698] 'agreement among raft nodes before linearized reading'  (duration: 224.606041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:25.278312Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.750035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:25.278585Z","caller":"traceutil/trace.go:172","msg":"trace[1705658733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:757; }","duration":"203.084293ms","start":"2025-12-21T20:09:25.075492Z","end":"2025-12-21T20:09:25.278577Z","steps":["trace[1705658733] 'agreement among raft nodes before linearized reading'  (duration: 202.726351ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:25.278488Z","caller":"traceutil/trace.go:172","msg":"trace[1845703208] transaction","detail":"{read_only:false; response_revision:757; number_of_response:1; }","duration":"234.83712ms","start":"2025-12-21T20:09:25.043639Z","end":"2025-12-21T20:09:25.278476Z","steps":["trace[1845703208] 'process raft request'  (duration: 234.315722ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:27.707255Z","caller":"traceutil/trace.go:172","msg":"trace[737712101] linearizableReadLoop","detail":"{readStateIndex:842; appliedIndex:842; }","duration":"378.500942ms","start":"2025-12-21T20:09:27.328735Z","end":"2025-12-21T20:09:27.707236Z","steps":["trace[737712101] 'read index received'  (duration: 378.47916ms)","trace[737712101] 'applied index is now lower than readState.Index'  (duration: 21.014µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:09:27.707371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"378.620908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:27.707434Z","caller":"traceutil/trace.go:172","msg":"trace[882783494] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:757; }","duration":"378.654868ms","start":"2025-12-21T20:09:27.328730Z","end":"2025-12-21T20:09:27.707385Z","steps":["trace[882783494] 'agreement among raft nodes before linearized reading'  (duration: 378.594657ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:27.707458Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:27.328711Z","time spent":"378.740731ms","remote":"127.0.0.1:53250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-21T20:09:27.707551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.314604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:27.707587Z","caller":"traceutil/trace.go:172","msg":"trace[262742464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:758; }","duration":"130.354394ms","start":"2025-12-21T20:09:27.577223Z","end":"2025-12-21T20:09:27.707577Z","steps":["trace[262742464] 'agreement among raft nodes before linearized reading'  (duration: 130.300714ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:27.708810Z","caller":"traceutil/trace.go:172","msg":"trace[1896059710] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"410.316439ms","start":"2025-12-21T20:09:27.298483Z","end":"2025-12-21T20:09:27.708799Z","steps":["trace[1896059710] 'process raft request'  (duration: 408.959291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:27.710567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:27.298466Z","time spent":"410.84239ms","remote":"127.0.0.1:53208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:757 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0] <==
	{"level":"info","ts":"2025-12-21T20:08:05.500351Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:08:05.501341Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:08:05.501483Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:08:05.501513Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:08:05.502263Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:08:05.504176Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:08:05.503374Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	{"level":"info","ts":"2025-12-21T20:08:27.252671Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-21T20:08:27.265612Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-089730","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"]}
	{"level":"error","ts":"2025-12-21T20:08:27.265814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T20:08:27.339548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T20:08:27.339603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.339631Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"be0eebdc09990bfd","current-leader-member-id":"be0eebdc09990bfd"}
	{"level":"info","ts":"2025-12-21T20:08:27.339715Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-21T20:08:27.339740Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339735Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339787Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T20:08:27.339829Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339861Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.143:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339871Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.143:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T20:08:27.339876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.143:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.343710Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"error","ts":"2025-12-21T20:08:27.343851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.143:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.343933Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2025-12-21T20:08:27.343942Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-089730","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"]}
	
	
	==> kernel <==
	 20:10:51 up 4 min,  0 users,  load average: 0.43, 0.50, 0.23
	Linux functional-089730 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88] <==
	I1221 20:08:41.304840       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1221 20:08:41.318057       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:08:41.330107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:08:41.339689       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:08:42.083889       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:08:42.228764       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:08:42.940605       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:08:42.994086       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:08:43.028477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:08:43.036612       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:08:44.715497       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:08:44.822757       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:08:44.864858       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:09:02.266182       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.14.224"}
	I1221 20:09:07.276308       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.71.41"}
	I1221 20:09:07.809284       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.60.116"}
	I1221 20:09:08.571112       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.250.20"}
	E1221 20:09:29.493012       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39346: use of closed network connection
	E1221 20:09:31.106098       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39366: use of closed network connection
	E1221 20:09:33.052884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39394: use of closed network connection
	E1221 20:09:35.655217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:38216: use of closed network connection
	E1221 20:10:47.193767       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:44022: use of closed network connection
	I1221 20:10:49.858201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:10:50.107230       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.47.52"}
	I1221 20:10:50.132630       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.234.203"}
	
	
	==> kube-controller-manager [4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da] <==
	I1221 20:08:44.401515       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401625       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401630       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401639       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401718       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401737       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401752       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401758       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401763       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.400746       1 range_allocator.go:177] "Sending events to api server"
	I1221 20:08:44.422039       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1221 20:08:44.422118       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:44.422139       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.400762       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.447374       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.485911       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.502366       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.502512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:08:44.502519       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1221 20:10:49.965515       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.972537       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.984381       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.986206       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.999925       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:50.000440       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e] <==
	I1221 20:08:10.045355       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047009       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047039       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047091       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047588       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047629       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047663       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047713       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047760       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047810       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047871       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047931       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047959       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.048026       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.048079       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054017       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054042       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054062       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.056992       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:10.086510       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.142819       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.143213       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:08:10.143223       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:08:10.157535       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.536220       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32] <==
	I1221 20:08:08.211810       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:08.312455       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:08.312509       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.143"]
	E1221 20:08:08.312570       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:08:08.350934       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 20:08:08.350996       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 20:08:08.351018       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:08:08.360472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:08:08.360732       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:08:08.360761       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:08.365438       1 config.go:309] "Starting node config controller"
	I1221 20:08:08.365485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:08:08.365492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:08:08.365642       1 config.go:200] "Starting service config controller"
	I1221 20:08:08.365651       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:08:08.365665       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:08:08.365668       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:08:08.365678       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:08:08.365681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:08:08.465793       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:08:08.465819       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:08:08.465866       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33] <==
	I1221 20:08:42.834580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:42.935507       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:42.936022       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.143"]
	E1221 20:08:42.936712       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:08:42.998600       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 20:08:42.998716       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 20:08:42.998837       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:08:43.013895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:08:43.014200       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:08:43.014454       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:43.019246       1 config.go:200] "Starting service config controller"
	I1221 20:08:43.019887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:08:43.019947       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:08:43.019963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:08:43.019984       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:08:43.019998       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:08:43.022523       1 config.go:309] "Starting node config controller"
	I1221 20:08:43.022563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:08:43.022581       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:08:43.120838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:08:43.121448       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:08:43.121527       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf] <==
	I1221 20:08:40.057223       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:08:41.161771       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:08:41.161865       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:08:41.161886       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:08:41.161904       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:08:41.257311       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:08:41.258205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:41.275921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:41.276002       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:41.276021       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:08:41.276004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:08:41.377165       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399] <==
	I1221 20:08:05.819836       1 serving.go:386] Generated self-signed cert in-memory
	I1221 20:08:06.912628       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:08:06.912811       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:06.918498       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 20:08:06.918580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918588       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:08:06.918625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:06.918633       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918645       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:08:06.918654       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:08:07.019711       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:07.019896       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:07.019990       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:27.280032       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1221 20:08:27.282871       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1221 20:08:27.282931       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:08:27.282947       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:27.282968       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1221 20:08:27.288165       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1221 20:08:27.288690       1 server.go:265] "[graceful-termination] secure server is exiting"
	
	
	==> kubelet <==
	Dec 21 20:10:44 functional-089730 kubelet[6456]: I1221 20:10:44.414954    6456 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23efd88c-e136-4f52-9ec1-1a751b7895ba-test-volume" pod "23efd88c-e136-4f52-9ec1-1a751b7895ba" (UID: "23efd88c-e136-4f52-9ec1-1a751b7895ba"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 21 20:10:44 functional-089730 kubelet[6456]: I1221 20:10:44.417960    6456 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23efd88c-e136-4f52-9ec1-1a751b7895ba-kube-api-access-8vxst" pod "23efd88c-e136-4f52-9ec1-1a751b7895ba" (UID: "23efd88c-e136-4f52-9ec1-1a751b7895ba"). InnerVolumeSpecName "kube-api-access-8vxst". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 21 20:10:44 functional-089730 kubelet[6456]: I1221 20:10:44.516013    6456 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vxst\" (UniqueName: \"kubernetes.io/projected/23efd88c-e136-4f52-9ec1-1a751b7895ba-kube-api-access-8vxst\") on node \"functional-089730\" DevicePath \"\""
	Dec 21 20:10:44 functional-089730 kubelet[6456]: I1221 20:10:44.516061    6456 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/23efd88c-e136-4f52-9ec1-1a751b7895ba-test-volume\") on node \"functional-089730\" DevicePath \"\""
	Dec 21 20:10:45 functional-089730 kubelet[6456]: I1221 20:10:45.206862    6456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3"
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.741521    6456 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/18b36476-f9e3-4d60-abc0-81d26cf443ff-kube-api-access-vj2rj\" (UniqueName: \"kubernetes.io/projected/18b36476-f9e3-4d60-abc0-81d26cf443ff-kube-api-access-vj2rj\") pod \"18b36476-f9e3-4d60-abc0-81d26cf443ff\" (UID: \"18b36476-f9e3-4d60-abc0-81d26cf443ff\") "
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.742374    6456 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/18b36476-f9e3-4d60-abc0-81d26cf443ff-pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\" (UniqueName: \"kubernetes.io/host-path/18b36476-f9e3-4d60-abc0-81d26cf443ff-pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\") pod \"18b36476-f9e3-4d60-abc0-81d26cf443ff\" (UID: \"18b36476-f9e3-4d60-abc0-81d26cf443ff\") "
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.742516    6456 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b36476-f9e3-4d60-abc0-81d26cf443ff-pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07" pod "18b36476-f9e3-4d60-abc0-81d26cf443ff" (UID: "18b36476-f9e3-4d60-abc0-81d26cf443ff"). InnerVolumeSpecName "pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.743636    6456 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b36476-f9e3-4d60-abc0-81d26cf443ff-kube-api-access-vj2rj" pod "18b36476-f9e3-4d60-abc0-81d26cf443ff" (UID: "18b36476-f9e3-4d60-abc0-81d26cf443ff"). InnerVolumeSpecName "kube-api-access-vj2rj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.842826    6456 reconciler_common.go:299] "Volume detached for volume \"pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\" (UniqueName: \"kubernetes.io/host-path/18b36476-f9e3-4d60-abc0-81d26cf443ff-pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\") on node \"functional-089730\" DevicePath \"\""
	Dec 21 20:10:47 functional-089730 kubelet[6456]: I1221 20:10:47.842853    6456 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vj2rj\" (UniqueName: \"kubernetes.io/projected/18b36476-f9e3-4d60-abc0-81d26cf443ff-kube-api-access-vj2rj\") on node \"functional-089730\" DevicePath \"\""
	Dec 21 20:10:48 functional-089730 kubelet[6456]: I1221 20:10:48.226015    6456 scope.go:122] "RemoveContainer" containerID="c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: I1221 20:10:48.363551    6456 scope.go:122] "RemoveContainer" containerID="c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: E1221 20:10:48.365180    6456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b\": container with ID starting with c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b not found: ID does not exist" containerID="c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: I1221 20:10:48.365258    6456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b"} err="failed to get container status \"c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b\": rpc error: code = NotFound desc = could not find container \"c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b\": container with ID starting with c2cdadd23dc22cdff221517280edada95583ebbed28e3302aaf730afdc9d7c5b not found: ID does not exist"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: E1221 20:10:48.402555    6456 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766347848399324446  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:216501}  inodes_used:{value:97}}"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: E1221 20:10:48.402585    6456 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766347848399324446  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:216501}  inodes_used:{value:97}}"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: I1221 20:10:48.548264    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\" (UniqueName: \"kubernetes.io/host-path/e23cc32d-4ece-469e-ab30-b9d6da91c272-pvc-f08b720f-0983-4b2d-b13f-1563d1ae6a07\") pod \"sp-pod\" (UID: \"e23cc32d-4ece-469e-ab30-b9d6da91c272\") " pod="default/sp-pod"
	Dec 21 20:10:48 functional-089730 kubelet[6456]: I1221 20:10:48.548321    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77f4\" (UniqueName: \"kubernetes.io/projected/e23cc32d-4ece-469e-ab30-b9d6da91c272-kube-api-access-r77f4\") pod \"sp-pod\" (UID: \"e23cc32d-4ece-469e-ab30-b9d6da91c272\") " pod="default/sp-pod"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.029232    6456 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.028866943 podStartE2EDuration="2.028866943s" podCreationTimestamp="2025-12-21 20:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:10:49.258176895 +0000 UTC m=+131.189796352" watchObservedRunningTime="2025-12-21 20:10:50.028866943 +0000 UTC m=+131.960486396"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.161954    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b04123e-2944-41d9-9d4d-1828100c9595-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-ntrf6\" (UID: \"7b04123e-2944-41d9-9d4d-1828100c9595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-ntrf6"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.162035    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvpm\" (UniqueName: \"kubernetes.io/projected/7b04123e-2944-41d9-9d4d-1828100c9595-kube-api-access-9zvpm\") pod \"dashboard-metrics-scraper-5565989548-ntrf6\" (UID: \"7b04123e-2944-41d9-9d4d-1828100c9595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-ntrf6"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.162217    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/64f9206b-1692-4ba4-9ece-aa4a73035f95-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-txnj9\" (UID: \"64f9206b-1692-4ba4-9ece-aa4a73035f95\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.162238    6456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqq59\" (UniqueName: \"kubernetes.io/projected/64f9206b-1692-4ba4-9ece-aa4a73035f95-kube-api-access-rqq59\") pod \"kubernetes-dashboard-b84665fb8-txnj9\" (UID: \"64f9206b-1692-4ba4-9ece-aa4a73035f95\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9"
	Dec 21 20:10:50 functional-089730 kubelet[6456]: I1221 20:10:50.214528    6456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="18b36476-f9e3-4d60-abc0-81d26cf443ff" path="/var/lib/kubelet/pods/18b36476-f9e3-4d60-abc0-81d26cf443ff/volumes"
	
	
	==> storage-provisioner [135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f08ce98] <==
	I1221 20:08:08.191974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:08:08.214171       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:08:08.214258       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:08:08.226136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:11.682054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:15.948012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:19.546310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:22.600980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.623361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.634000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:08:25.634669       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:08:25.634852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12!
	I1221 20:08:25.635037       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47f7b129-41c7-4aa9-9e55-5611e4d8b123", APIVersion:"v1", ResourceVersion:"534", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12 became leader
	W1221 20:08:25.644548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.652428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:08:25.735891       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12!
	
	
	==> storage-provisioner [30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f] <==
	W1221 20:10:26.167629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:28.171313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:28.179814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:30.183979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:30.190143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:32.195109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:32.202813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:34.207750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:34.216870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:36.219918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:36.229698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:38.233510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:38.238038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:40.244024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:40.260521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:42.262879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:42.271584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:44.275497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:44.283790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:46.287086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:46.292851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:48.296173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:48.305084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:50.308876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:10:50.313798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
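[editor's note] The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner logs above come from its leader election, which still locks on the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event earlier in the same log). They are noise rather than a cause of this failure. For triage on a rerun, the illustrative commands below (assumed, not part of the recorded run) show the legacy lock object and the discovery.k8s.io EndpointSlices the warning points to:

# hedged sketch, not executed by the test suite
kubectl --context functional-089730 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
kubectl --context functional-089730 get endpointslices -A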
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-089730 -n functional-089730
helpers_test.go:270: (dbg) Run:  kubectl --context functional-089730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9: exit status 1 (84.691408ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:44 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 21 Dec 2025 20:10:42 +0000
	      Finished:     Sun, 21 Dec 2025 20:10:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8vxst (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8vxst:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  68s   default-scheduler  Successfully assigned default/busybox-mount to functional-089730
	  Normal  Pulling    68s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.276s (57.938s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10s   kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    10s   kubelet            spec.containers{mount-munger}: Container started
	
	
	Name:             hello-node-5758569b79-cb7fv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vfkxv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vfkxv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  105s                default-scheduler  Successfully assigned default/hello-node-5758569b79-cb7fv to functional-089730
	  Warning  Failed     46s                 kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s                 kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Normal   BackOff    45s                 kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     45s                 kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    31s (x2 over 104s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-twx5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkn52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkn52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  104s               default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-twx5b to functional-089730
	  Warning  Failed     16s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     16s                kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Normal   BackOff    15s                kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s                kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    0s (x2 over 103s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-ntrf6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-txnj9" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (3.50s)
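[editor's note] The non-running pods listed in the post-mortem above are dominated by the echo-server deployments stuck in ImagePullBackOff: docker.io rejects their pulls with toomanyrequests, Docker Hub's unauthenticated pull rate limit. The root cause of the 3.50s DashboardCmd failure itself is not visible in this excerpt. For reruns that should not depend on docker.io, one option is to pull the image once on the host and preload it into the node before the deployments are created; the sketch below is illustrative only and assumes the :latest tag, since the test pulls kicbase/echo-server with no explicit tag.

# hedged sketch, not executed by the test suite
docker pull kicbase/echo-server:latest
minikube -p functional-089730 image load kicbase/echo-server:latest
minikube -p functional-089730 image ls | grep echo-server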

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-089730 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-089730 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-twx5b" [b4244c12-9087-4028-9616-2bec16ef9155] Pending
helpers_test.go:353: "hello-node-connect-9f67c86d4-twx5b" [b4244c12-9087-4028-9616-2bec16ef9155] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-089730 -n functional-089730
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-21 20:19:08.815890399 +0000 UTC m=+1977.653682246
functional_test.go:1645: (dbg) Run:  kubectl --context functional-089730 describe po hello-node-connect-9f67c86d4-twx5b -n default
functional_test.go:1645: (dbg) kubectl --context functional-089730 describe po hello-node-connect-9f67c86d4-twx5b -n default:
Name:             hello-node-connect-9f67c86d4-twx5b
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-089730/192.168.39.143
Start Time:       Sun, 21 Dec 2025 20:09:08 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkn52 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fkn52:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-twx5b to functional-089730
  Warning  Failed     3m43s                 kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m2s (x4 over 9m59s)  kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
  Warning  Failed     72s (x3 over 8m32s)   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     72s (x4 over 8m32s)   kubelet            spec.containers{echo-server}: Error: ErrImagePull
  Normal   BackOff    6s (x9 over 8m31s)    kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
  Warning  Failed     6s (x9 over 8m31s)    kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-089730 logs hello-node-connect-9f67c86d4-twx5b -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-089730 logs hello-node-connect-9f67c86d4-twx5b -n default: exit status 1 (67.383914ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-twx5b" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-089730 logs hello-node-connect-9f67c86d4-twx5b -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-089730 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-twx5b
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-089730/192.168.39.143
Start Time:       Sun, 21 Dec 2025 20:09:08 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkn52 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fkn52:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-twx5b to functional-089730
  Warning  Failed     3m44s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m3s (x4 over 10m)   kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
  Warning  Failed     73s (x3 over 8m33s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     73s (x4 over 8m33s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
  Normal   BackOff    7s (x9 over 8m32s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
  Warning  Failed     7s (x9 over 8m32s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-089730 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-089730 logs -l app=hello-node-connect: exit status 1 (71.60944ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-twx5b" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-089730 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-089730 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.250.20
IPs:                      10.110.250.20
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32748/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
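[editor's note] The empty Endpoints line in the service describe above is consistent with the pod never passing readiness (it is stuck in ImagePullBackOff), not with a selector or NodePort problem: the Service selects app=hello-node-connect, the pod carries that label, and the pod reports Ready=False. The illustrative commands below (assumed, not part of the recorded run) cross-check that reading:

# hedged sketch, not executed by the test suite
kubectl --context functional-089730 get endpoints hello-node-connect
kubectl --context functional-089730 get pod -l app=hello-node-connect -o jsonpath='{.items[0].status.conditions}'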
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-089730 -n functional-089730
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 logs -n 25: (1.432915007s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                   ARGS                                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-089730 ssh -- ls -la /mount-9p                                                                                                 │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh            │ functional-089730 ssh sudo umount -f /mount-9p                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount          │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount1 --alsologtostderr -v=1      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount          │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount2 --alsologtostderr -v=1      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ ssh            │ functional-089730 ssh findmnt -T /mount1                                                                                                  │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ mount          │ -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount3 --alsologtostderr -v=1      │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ ssh            │ functional-089730 ssh findmnt -T /mount1                                                                                                  │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh            │ functional-089730 ssh findmnt -T /mount2                                                                                                  │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh            │ functional-089730 ssh findmnt -T /mount3                                                                                                  │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ mount          │ -p functional-089730 --kill=true                                                                                                          │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start          │ -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start          │ -p functional-089730 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ start          │ -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-089730 --alsologtostderr -v=1                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ update-context │ functional-089730 update-context --alsologtostderr -v=2                                                                                   │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ update-context │ functional-089730 update-context --alsologtostderr -v=2                                                                                   │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ update-context │ functional-089730 update-context --alsologtostderr -v=2                                                                                   │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ image          │ functional-089730 image ls --format short --alsologtostderr                                                                               │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ image          │ functional-089730 image ls --format yaml --alsologtostderr                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ ssh            │ functional-089730 ssh pgrep buildkitd                                                                                                     │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │                     │
	│ image          │ functional-089730 image build -t localhost/my-image:functional-089730 testdata/build --alsologtostderr                                    │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ image          │ functional-089730 image ls --format json --alsologtostderr                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ image          │ functional-089730 image ls --format table --alsologtostderr                                                                               │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ image          │ functional-089730 image ls                                                                                                                │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:10 UTC │ 21 Dec 25 20:10 UTC │
	│ service        │ functional-089730 service list                                                                                                            │ functional-089730 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:10:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:10:48.874776  137625 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:10:48.874961  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.874974  137625 out.go:374] Setting ErrFile to fd 2...
	I1221 20:10:48.874980  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.875432  137625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:10:48.876089  137625 out.go:368] Setting JSON to false
	I1221 20:10:48.877361  137625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13999,"bootTime":1766333850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:10:48.877445  137625 start.go:143] virtualization: kvm guest
	I1221 20:10:48.879534  137625 out.go:179] * [functional-089730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:10:48.881061  137625 notify.go:221] Checking for updates...
	I1221 20:10:48.881083  137625 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:10:48.882524  137625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:10:48.884057  137625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 20:10:48.885365  137625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 20:10:48.886412  137625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:10:48.887645  137625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:10:48.889464  137625 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:10:48.890237  137625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:10:48.921546  137625 out.go:179] * Using the kvm2 driver based on the existing profile
	I1221 20:10:48.922904  137625 start.go:309] selected driver: kvm2
	I1221 20:10:48.922919  137625 start.go:928] validating driver "kvm2" against &{Name:functional-089730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-089730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:10:48.923086  137625 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:10:48.925076  137625 out.go:203] 
	W1221 20:10:48.926169  137625 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 20:10:48.927191  137625 out.go:203] 
	
	
	==> CRI-O <==
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.874549223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766348349874520018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886d1f8d-0b53-4818-b720-a10a7606dfb8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.875572794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=851a6ad4-c900-427b-888b-215e3dc18ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.875630725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=851a6ad4-c900-427b-888b-215e3dc18ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.875931491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=851a6ad4-c900-427b-888b-215e3dc18ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.917723422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=356cf306-74e3-4edf-aa8a-9b2a01dbfa43 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.917817571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=356cf306-74e3-4edf-aa8a-9b2a01dbfa43 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.919533134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebd0bdb0-963c-4c00-9999-3a351a7df25d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.920804437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766348349920776469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebd0bdb0-963c-4c00-9999-3a351a7df25d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.921933180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=674cc416-37ae-4970-bf71-f0f33da5b511 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.922005796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=674cc416-37ae-4970-bf71-f0f33da5b511 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.922344790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=674cc416-37ae-4970-bf71-f0f33da5b511 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.953213860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f778ae1-32ff-403c-96ed-0be64bc56df9 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.953326381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f778ae1-32ff-403c-96ed-0be64bc56df9 name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.954720189Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f272e11-d50b-4194-b7bd-3d8f91991459 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.955517115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766348349955492156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f272e11-d50b-4194-b7bd-3d8f91991459 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.956696268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c20f5276-e86e-4a22-b823-336e8e677cfa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.956766548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c20f5276-e86e-4a22-b823-336e8e677cfa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.957105961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c20f5276-e86e-4a22-b823-336e8e677cfa name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.988757744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a43c3eb3-40d2-4628-be7e-4574a401eacd name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.988834156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a43c3eb3-40d2-4628-be7e-4574a401eacd name=/runtime.v1.RuntimeService/Version
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.990656531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=848ed5f0-3a01-426d-92f7-e205eadf211a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.991359678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766348349991288628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242160,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=848ed5f0-3a01-426d-92f7-e205eadf211a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.992600158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7541572-9a3c-49b3-ace2-4e3c0a110c41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.992721915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7541572-9a3c-49b3-ace2-4e3c0a110c41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 20:19:09 functional-089730 crio[5685]: time="2025-12-21 20:19:09.993124078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b81a5792ff6b7f3b9bf957df63be2ccdf7682d1abe17ae0a825b2b0b6bebb2,PodSandboxId:e3ed1ff78915a28d56fa5c1e2f498a05cd7070952c15c30d517e4938ccf97710,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5,State:CONTAINER_RUNNING,CreatedAt:1766347848968691685,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e23cc32d-4ece-469e-ab30-b9d6da91c272,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86,PodSandboxId:fef0553634746934eb4e4ff3c366cc9f4e8787f5a1466656636ce1a0311a8dc3,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1766347842793026855,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 23efd88c-e136-4f52-9ec1-1a751b7895ba,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33f3b6125f3dac9718e2b27544392953a029ec5eae9e1503b43fc9fad78bdbc,PodSandboxId:7d9e0b536cc6fb59f120bc00735e8efa24289a00352d99f2ae2e20697b5fff26,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1766347762079286320,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-9r6m2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 289487f7-0d17-49ed-81be-8171c3228316,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33,PodSandboxId:11c5b1a74e282fbfcc5fae1bf73c301a149992fcabb1c7f0af6339188cb39f8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1766347722486741569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f,PodSandboxId:8b268f461c72e326d21f2e421ad6ac1506bf8219c5e8ab91c78bfddfbc705a48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1766347722525237366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c,PodSandboxId:4de5a4cac22709d4293cbf620cd5e3211e07f007442453a34d5720573ecead24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1766347722500475096,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88,PodSandboxId:04cd4698199fb884dc7c2c08c7fba8ed8fcaa6c4d94709914283d89615fde58d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1766347719111025395,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9bacad46b021ae398e328e4a825a460,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da,PodSandboxId:ef8159a89b5433a23ce86fe569ca161f0107998bab99f24823299883715c1bd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a
56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1766347718886290305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf,PodSandboxId:e6cf51b2360451f224250a6252d186331dae473ccbb7e2ab17dec793d7df73bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d50120
7f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1766347718859625032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83,PodSandboxId:551acf1af3dc1a187b7add1afcf76c2227a81dc9eb2f8bc2e4a75b6bc89
cd9a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1766347718826465277,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f0
8ce98,PodSandboxId:49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1766347687948902995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4291be3-1c09-465a-9574-d7d70f9846bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a,PodSand
boxId:b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1766347687976488684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-ntzl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e29723-ca1b-4f92-b6a2-a1679d5a2816,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"
containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32,PodSandboxId:80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1766347683335999263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6smpp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03096d0a-24b6-4db9-a07d-3ca48f199450,},Annotations:map[string]string{io.kubernetes.container.hash:
d6e0e1a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e,PodSandboxId:5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1766347683284473882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994b1850d250d98450256a10cffe058b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0,PodSandboxId:78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1766347683240390326,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50
c2bfc932143776fe5aeab49800dea8,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399,PodSandboxId:e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1766347683146173787,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.po
d.name: kube-scheduler-functional-089730,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347c3e9dc7e9e588dc5fd1feb68add2f,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7541572-9a3c-49b3-ace2-4e3c0a110c41 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e6b81a5792ff6       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                              8 minutes ago       Running             myfrontend                0                   e3ed1ff78915a       sp-pod                                      default
	7c98b5b625233       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago       Exited              mount-munger              0                   fef0553634746       busybox-mount                               default
	f33f3b6125f3d       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   9 minutes ago       Running             mysql                     0                   7d9e0b536cc6f       mysql-7d7b65bc95-9r6m2                      default
	30476b1b24d7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       3                   8b268f461c72e       storage-provisioner                         kube-system
	d88ac7ce60607       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Running             coredns                   2                   4de5a4cac2270       coredns-7d764666f9-ntzl5                    kube-system
	ecbe91e48af3a       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              10 minutes ago      Running             kube-proxy                2                   11c5b1a74e282       kube-proxy-6smpp                            kube-system
	f929c086126ff       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              10 minutes ago      Running             kube-apiserver            0                   04cd4698199fb       kube-apiserver-functional-089730            kube-system
	4198aac42042d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              10 minutes ago      Running             kube-controller-manager   2                   ef8159a89b543       kube-controller-manager-functional-089730   kube-system
	a65f0a39880a3       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              10 minutes ago      Running             kube-scheduler            2                   e6cf51b236045       kube-scheduler-functional-089730            kube-system
	89341fc9df43d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              10 minutes ago      Running             etcd                      2                   551acf1af3dc1       etcd-functional-089730                      kube-system
	da242b28631cc       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago      Exited              coredns                   1                   b9c953e28fbfe       coredns-7d764666f9-ntzl5                    kube-system
	135b8186acfdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Exited              storage-provisioner       2                   49f246cdc6803       storage-provisioner                         kube-system
	304c649f30afc       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              11 minutes ago      Exited              kube-proxy                1                   80613639237f2       kube-proxy-6smpp                            kube-system
	88b9a10432063       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              11 minutes ago      Exited              kube-controller-manager   1                   5a95cd1a128c8       kube-controller-manager-functional-089730   kube-system
	e5533b53ecc8f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              11 minutes ago      Exited              etcd                      1                   78470c1e28a0c       etcd-functional-089730                      kube-system
	c885fa9516caf       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              11 minutes ago      Exited              kube-scheduler            1                   e28604768148b       kube-scheduler-functional-089730            kube-system
	
	
	==> coredns [d88ac7ce6060714f83c1c22c5fdcfe49c4361814cdea91996f3475738b56891c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36357 - 59019 "HINFO IN 6686439616755241019.5601899628486953117. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017389489s
	
	
	==> coredns [da242b28631cc30a2b54ade27c63a355c8f1c2c36743ca5b04c900baae24111a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55070 - 63953 "HINFO IN 8837462380039339189.797622655870743556. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027717276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-089730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-089730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=functional-089730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_07_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:07:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-089730
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:19:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:15:49 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:15:49 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:15:49 +0000   Sun, 21 Dec 2025 20:07:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:15:49 +0000   Sun, 21 Dec 2025 20:07:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    functional-089730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7924d9793944d52ae99ab871371d6d3
	  System UUID:                b7924d97-9394-4d52-ae99-ab871371d6d3
	  Boot ID:                    57f5eda2-a2ef-4802-8d47-5aa4113384a4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-cb7fv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-twx5b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-9r6m2                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 coredns-7d764666f9-ntzl5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-089730                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-089730              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-089730     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6smpp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-089730              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-ntrf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-txnj9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-089730 event: Registered Node functional-089730 in Controller
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001271] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000307] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.192154] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088656] kauditd_printk_skb: 1 callbacks suppressed
	[Dec21 20:07] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.131768] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.033488] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.027779] kauditd_printk_skb: 251 callbacks suppressed
	[ +28.266386] kauditd_printk_skb: 39 callbacks suppressed
	[Dec21 20:08] kauditd_printk_skb: 358 callbacks suppressed
	[  +6.797987] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.103594] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.741724] kauditd_printk_skb: 418 callbacks suppressed
	[  +1.465278] kauditd_printk_skb: 131 callbacks suppressed
	[Dec21 20:09] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 47 callbacks suppressed
	[Dec21 20:10] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.299023] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.768541] kauditd_printk_skb: 43 callbacks suppressed
	[  +2.545214] crun[10569]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.443703] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [89341fc9df43d6401a41e0c78827ece223622180ce5a56c9f2c9aba7b025ed83] <==
	{"level":"info","ts":"2025-12-21T20:09:21.023644Z","caller":"traceutil/trace.go:172","msg":"trace[1696225313] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:742; }","duration":"446.160193ms","start":"2025-12-21T20:09:20.577479Z","end":"2025-12-21T20:09:21.023639Z","steps":["trace[1696225313] 'agreement among raft nodes before linearized reading'  (duration: 446.085479ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.023663Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:20.577462Z","time spent":"446.196892ms","remote":"127.0.0.1:53250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-21T20:09:21.024483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.622009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:21.024605Z","caller":"traceutil/trace.go:172","msg":"trace[1575988518] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:743; }","duration":"205.747249ms","start":"2025-12-21T20:09:20.818850Z","end":"2025-12-21T20:09:21.024597Z","steps":["trace[1575988518] 'agreement among raft nodes before linearized reading'  (duration: 205.600977ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:21.024715Z","caller":"traceutil/trace.go:172","msg":"trace[1738677528] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"496.090127ms","start":"2025-12-21T20:09:20.528617Z","end":"2025-12-21T20:09:21.024707Z","steps":["trace[1738677528] 'process raft request'  (duration: 495.568338ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.025573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.626276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:21.025703Z","caller":"traceutil/trace.go:172","msg":"trace[430887472] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:743; }","duration":"195.755456ms","start":"2025-12-21T20:09:20.829940Z","end":"2025-12-21T20:09:21.025696Z","steps":["trace[430887472] 'agreement among raft nodes before linearized reading'  (duration: 195.610107ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:21.026563Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:20.528601Z","time spent":"496.207591ms","remote":"127.0.0.1:53208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:742 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-21T20:09:25.277670Z","caller":"traceutil/trace.go:172","msg":"trace[443839155] linearizableReadLoop","detail":"{readStateIndex:841; appliedIndex:841; }","duration":"224.483292ms","start":"2025-12-21T20:09:25.053162Z","end":"2025-12-21T20:09:25.277645Z","steps":["trace[443839155] 'read index received'  (duration: 224.477273ms)","trace[443839155] 'applied index is now lower than readState.Index'  (duration: 3.592µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:09:25.277823Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.655352ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:25.277868Z","caller":"traceutil/trace.go:172","msg":"trace[1959271698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:756; }","duration":"224.72507ms","start":"2025-12-21T20:09:25.053136Z","end":"2025-12-21T20:09:25.277861Z","steps":["trace[1959271698] 'agreement among raft nodes before linearized reading'  (duration: 224.606041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:25.278312Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.750035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:25.278585Z","caller":"traceutil/trace.go:172","msg":"trace[1705658733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:757; }","duration":"203.084293ms","start":"2025-12-21T20:09:25.075492Z","end":"2025-12-21T20:09:25.278577Z","steps":["trace[1705658733] 'agreement among raft nodes before linearized reading'  (duration: 202.726351ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:25.278488Z","caller":"traceutil/trace.go:172","msg":"trace[1845703208] transaction","detail":"{read_only:false; response_revision:757; number_of_response:1; }","duration":"234.83712ms","start":"2025-12-21T20:09:25.043639Z","end":"2025-12-21T20:09:25.278476Z","steps":["trace[1845703208] 'process raft request'  (duration: 234.315722ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:27.707255Z","caller":"traceutil/trace.go:172","msg":"trace[737712101] linearizableReadLoop","detail":"{readStateIndex:842; appliedIndex:842; }","duration":"378.500942ms","start":"2025-12-21T20:09:27.328735Z","end":"2025-12-21T20:09:27.707236Z","steps":["trace[737712101] 'read index received'  (duration: 378.47916ms)","trace[737712101] 'applied index is now lower than readState.Index'  (duration: 21.014µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:09:27.707371Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"378.620908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:27.707434Z","caller":"traceutil/trace.go:172","msg":"trace[882783494] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:757; }","duration":"378.654868ms","start":"2025-12-21T20:09:27.328730Z","end":"2025-12-21T20:09:27.707385Z","steps":["trace[882783494] 'agreement among raft nodes before linearized reading'  (duration: 378.594657ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:27.707458Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:27.328711Z","time spent":"378.740731ms","remote":"127.0.0.1:53250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-21T20:09:27.707551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.314604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:09:27.707587Z","caller":"traceutil/trace.go:172","msg":"trace[262742464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:758; }","duration":"130.354394ms","start":"2025-12-21T20:09:27.577223Z","end":"2025-12-21T20:09:27.707577Z","steps":["trace[262742464] 'agreement among raft nodes before linearized reading'  (duration: 130.300714ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:09:27.708810Z","caller":"traceutil/trace.go:172","msg":"trace[1896059710] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"410.316439ms","start":"2025-12-21T20:09:27.298483Z","end":"2025-12-21T20:09:27.708799Z","steps":["trace[1896059710] 'process raft request'  (duration: 408.959291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:09:27.710567Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:09:27.298466Z","time spent":"410.84239ms","remote":"127.0.0.1:53208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:757 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-21T20:18:39.797359Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2025-12-21T20:18:39.825678Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1098,"took":"27.849155ms","hash":4270611789,"current-db-size-bytes":3559424,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-21T20:18:39.825750Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4270611789,"revision":1098,"compact-revision":-1}
	
	
	==> etcd [e5533b53ecc8fa1d55a8620ddb265f19587a3ec2a4dd79ea29d2caf054b90bf0] <==
	{"level":"info","ts":"2025-12-21T20:08:05.500351Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:08:05.501341Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:08:05.501483Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:08:05.501513Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:08:05.502263Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:08:05.504176Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:08:05.503374Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	{"level":"info","ts":"2025-12-21T20:08:27.252671Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-21T20:08:27.265612Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-089730","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"]}
	{"level":"error","ts":"2025-12-21T20:08:27.265814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T20:08:27.339548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T20:08:27.339603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.339631Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"be0eebdc09990bfd","current-leader-member-id":"be0eebdc09990bfd"}
	{"level":"info","ts":"2025-12-21T20:08:27.339715Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-21T20:08:27.339740Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339735Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339787Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T20:08:27.339829Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339861Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.143:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T20:08:27.339871Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.143:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T20:08:27.339876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.143:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.343710Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"error","ts":"2025-12-21T20:08:27.343851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.143:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T20:08:27.343933Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2025-12-21T20:08:27.343942Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-089730","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"]}
	
	
	==> kernel <==
	 20:19:10 up 12 min,  0 users,  load average: 0.32, 0.32, 0.23
	Linux functional-089730 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f929c086126ff2686e21af271eb411c613ada14b105a0acd309a6a6704738e88] <==
	I1221 20:08:41.330107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:08:41.339689       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:08:42.083889       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:08:42.228764       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:08:42.940605       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:08:42.994086       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:08:43.028477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:08:43.036612       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:08:44.715497       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:08:44.822757       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:08:44.864858       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:09:02.266182       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.14.224"}
	I1221 20:09:07.276308       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.71.41"}
	I1221 20:09:07.809284       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.60.116"}
	I1221 20:09:08.571112       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.250.20"}
	E1221 20:09:29.493012       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39346: use of closed network connection
	E1221 20:09:31.106098       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39366: use of closed network connection
	E1221 20:09:33.052884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:39394: use of closed network connection
	E1221 20:09:35.655217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:38216: use of closed network connection
	E1221 20:10:47.193767       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:44022: use of closed network connection
	I1221 20:10:49.858201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:10:50.107230       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.47.52"}
	I1221 20:10:50.132630       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.234.203"}
	E1221 20:10:54.508953       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8441->192.168.39.1:60388: use of closed network connection
	I1221 20:18:41.241074       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4198aac42042dc20d4cf3101e76a1d787833f1041837c2f44bed919ddc3884da] <==
	I1221 20:08:44.401515       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401625       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401630       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401639       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401718       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401737       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401752       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401758       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.401763       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.400746       1 range_allocator.go:177] "Sending events to api server"
	I1221 20:08:44.422039       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1221 20:08:44.422118       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:44.422139       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.400762       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.447374       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.485911       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.502366       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:44.502512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:08:44.502519       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1221 20:10:49.965515       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.972537       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.984381       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.986206       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:49.999925       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1221 20:10:50.000440       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [88b9a10432063f6dafdfb270a0dcb791706dac472b675f79994e50ae8dc7f25e] <==
	I1221 20:08:10.045355       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047009       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047039       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047091       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047588       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047629       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047663       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047713       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047760       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047810       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047871       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047931       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.047959       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.048026       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.048079       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054017       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054042       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.054062       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.056992       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:10.086510       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.142819       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.143213       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:08:10.143223       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:08:10.157535       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:10.536220       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [304c649f30afc1929e74ea3be4e66b048fee5d7cf6cbf1c0b3f467d95f8bdb32] <==
	I1221 20:08:08.211810       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:08.312455       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:08.312509       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.143"]
	E1221 20:08:08.312570       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:08:08.350934       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 20:08:08.350996       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 20:08:08.351018       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:08:08.360472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:08:08.360732       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:08:08.360761       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:08.365438       1 config.go:309] "Starting node config controller"
	I1221 20:08:08.365485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:08:08.365492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:08:08.365642       1 config.go:200] "Starting service config controller"
	I1221 20:08:08.365651       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:08:08.365665       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:08:08.365668       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:08:08.365678       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:08:08.365681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:08:08.465793       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:08:08.465819       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:08:08.465866       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ecbe91e48af3a0dc3c616569835aa1492708bc166db654a923e85b13b49c3c33] <==
	I1221 20:08:42.834580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:42.935507       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:42.936022       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.143"]
	E1221 20:08:42.936712       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:08:42.998600       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 20:08:42.998716       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 20:08:42.998837       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:08:43.013895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:08:43.014200       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:08:43.014454       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:43.019246       1 config.go:200] "Starting service config controller"
	I1221 20:08:43.019887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:08:43.019947       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:08:43.019963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:08:43.019984       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:08:43.019998       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:08:43.022523       1 config.go:309] "Starting node config controller"
	I1221 20:08:43.022563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:08:43.022581       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:08:43.120838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:08:43.121448       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:08:43.121527       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a65f0a39880a355c99a7a6703a4a8be4b27a1964d47ee979fdaac8e44fe37dbf] <==
	I1221 20:08:40.057223       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:08:41.161771       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:08:41.161865       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:08:41.161886       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:08:41.161904       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:08:41.257311       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:08:41.258205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:41.275921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:41.276002       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:41.276021       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:08:41.276004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:08:41.377165       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [c885fa9516caf6fc0fd470ba7fee82b76c4f0bd2ce66620d319bdadad89f6399] <==
	I1221 20:08:05.819836       1 serving.go:386] Generated self-signed cert in-memory
	I1221 20:08:06.912628       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:08:06.912811       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:08:06.918498       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 20:08:06.918580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918588       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:08:06.918625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:06.918633       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918645       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:08:06.918654       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:08:06.918722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:08:07.019711       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:07.019896       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:07.019990       1 shared_informer.go:377] "Caches are synced"
	I1221 20:08:27.280032       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1221 20:08:27.282871       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1221 20:08:27.282931       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:08:27.282947       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:08:27.282968       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1221 20:08:27.288165       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1221 20:08:27.288690       1 server.go:265] "[graceful-termination] secure server is exiting"
	
	
	==> kubelet <==
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.349755    6456 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda2e29723-ca1b-4f92-b6a2-a1679d5a2816/crio-b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2: Error finding container b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2: Status 404 returned error can't find the container with id b9c953e28fbfe93be0b4448ca15a52a23a3aac3d4f4c4ebfe1dfaacf9696f3b2
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.350385    6456 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod347c3e9dc7e9e588dc5fd1feb68add2f/crio-e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef: Error finding container e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef: Status 404 returned error can't find the container with id e28604768148b056f8e3b02f4cb80458c2adfa189a84d9ccd28ad83fa445d8ef
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.350682    6456 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod994b1850d250d98450256a10cffe058b/crio-5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be: Error finding container 5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be: Status 404 returned error can't find the container with id 5a95cd1a128c8df2d934a295e2f9e956cc9c7309ec9cec1f91565d6efd3843be
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.351014    6456 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod50c2bfc932143776fe5aeab49800dea8/crio-78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f: Error finding container 78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f: Status 404 returned error can't find the container with id 78470c1e28a0ce52d630fecbe35e9fd98d89e5eedb5b63cd55007930068a4a7f
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.351380    6456 manager.go:1119] Failed to create existing container: /kubepods/besteffort/podf4291be3-1c09-465a-9574-d7d70f9846bf/crio-49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba: Error finding container 49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba: Status 404 returned error can't find the container with id 49f246cdc6803065b80908875edf15d61deb53669a4a3d1aee2633163fb2ebba
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.351827    6456 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod03096d0a-24b6-4db9-a07d-3ca48f199450/crio-80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14: Error finding container 80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14: Status 404 returned error can't find the container with id 80613639237f2962194d8a35562ca1ac9ac0e5f68531ce2077f471ea5b7e2e14
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.511084    6456 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766348318510388709  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:18:38 functional-089730 kubelet[6456]: E1221 20:18:38.511121    6456 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766348318510388709  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:18:44 functional-089730 kubelet[6456]: E1221 20:18:44.212034    6456 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-089730" containerName="kube-scheduler"
	Dec 21 20:18:48 functional-089730 kubelet[6456]: E1221 20:18:48.514952    6456 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766348328514371915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:18:48 functional-089730 kubelet[6456]: E1221 20:18:48.515161    6456 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766348328514371915  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:18:49 functional-089730 kubelet[6456]: E1221 20:18:49.211775    6456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-twx5b" podUID="b4244c12-9087-4028-9616-2bec16ef9155"
	Dec 21 20:18:51 functional-089730 kubelet[6456]: E1221 20:18:51.210634    6456 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9" containerName="kubernetes-dashboard"
	Dec 21 20:18:51 functional-089730 kubelet[6456]: E1221 20:18:51.215509    6456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9" podUID="64f9206b-1692-4ba4-9ece-aa4a73035f95"
	Dec 21 20:18:58 functional-089730 kubelet[6456]: E1221 20:18:58.516350    6456 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766348338516108143  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:18:58 functional-089730 kubelet[6456]: E1221 20:18:58.516369    6456 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766348338516108143  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:19:02 functional-089730 kubelet[6456]: E1221 20:19:02.211479    6456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-twx5b" podUID="b4244c12-9087-4028-9616-2bec16ef9155"
	Dec 21 20:19:02 functional-089730 kubelet[6456]: E1221 20:19:02.557792    6456 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 21 20:19:02 functional-089730 kubelet[6456]: E1221 20:19:02.557836    6456 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 21 20:19:02 functional-089730 kubelet[6456]: E1221 20:19:02.558240    6456 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-5758569b79-cb7fv_default(31efe17a-06f9-4507-ad66-b645f379b8ad): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 21 20:19:02 functional-089730 kubelet[6456]: E1221 20:19:02.558275    6456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-cb7fv" podUID="31efe17a-06f9-4507-ad66-b645f379b8ad"
	Dec 21 20:19:06 functional-089730 kubelet[6456]: E1221 20:19:06.210740    6456 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9" containerName="kubernetes-dashboard"
	Dec 21 20:19:06 functional-089730 kubelet[6456]: E1221 20:19:06.212984    6456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-txnj9" podUID="64f9206b-1692-4ba4-9ece-aa4a73035f95"
	Dec 21 20:19:08 functional-089730 kubelet[6456]: E1221 20:19:08.520180    6456 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766348348519487357  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	Dec 21 20:19:08 functional-089730 kubelet[6456]: E1221 20:19:08.520211    6456 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766348348519487357  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242160}  inodes_used:{value:113}}"
	
	
	==> storage-provisioner [135b8186acfdfbe3d6f2ba492af723f3eb852911f6ea92f27337224f1f08ce98] <==
	I1221 20:08:08.191974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:08:08.214171       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:08:08.214258       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:08:08.226136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:11.682054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:15.948012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:19.546310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:22.600980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.623361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.634000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:08:25.634669       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:08:25.634852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12!
	I1221 20:08:25.635037       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47f7b129-41c7-4aa9-9e55-5611e4d8b123", APIVersion:"v1", ResourceVersion:"534", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12 became leader
	W1221 20:08:25.644548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:08:25.652428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:08:25.735891       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-089730_abbefef7-221a-4763-abeb-b90ce68acc12!
	
	
	==> storage-provisioner [30476b1b24d7e055027c37f6876dd2b3a7167af34d2bc5f1496846ec0d3fbf1f] <==
	W1221 20:18:44.815189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:46.820619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:46.828689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:48.831978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:48.840077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:50.843482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:50.853070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:52.856534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:52.862535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:54.865939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:54.874684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:56.878154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:56.884025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:58.888349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:18:58.894456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:00.899155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:00.907846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:02.911498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:02.916805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:04.920688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:04.929332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:06.932887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:06.938297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:08.941986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:19:08.951708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
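The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner logs above come from its leader election, which (per the "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" line) still uses a v1 Endpoints object as its lock; in the first block the election nevertheless succeeds. A minimal way to inspect that lock object and the discovery.k8s.io/v1 replacement the warning points at, assuming the functional-089730 context is still reachable:

	kubectl --context functional-089730 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml   # the Endpoints object used as the leader-election lock
	kubectl --context functional-089730 -n kube-system get endpointslices                               # the EndpointSlice API the deprecation warning recommends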
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-089730 -n functional-089730
helpers_test.go:270: (dbg) Run:  kubectl --context functional-089730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9: exit status 1 (105.76643ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:44 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7c98b5b625233e8a9da1bf14b1c6bbadf84fac4c441dfe39fdae8f01d350ab86
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 21 Dec 2025 20:10:42 +0000
	      Finished:     Sun, 21 Dec 2025 20:10:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8vxst (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8vxst:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m27s  default-scheduler  Successfully assigned default/busybox-mount to functional-089730
	  Normal  Pulling    9m27s  kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m29s  kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.276s (57.938s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m29s  kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    8m29s  kubelet            spec.containers{mount-munger}: Container started
	
	
	Name:             hello-node-5758569b79-cb7fv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vfkxv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vfkxv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-cb7fv to functional-089730
	  Warning  Failed     9m5s                   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m16s (x2 over 7m59s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m (x10 over 9m4s)     kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m (x10 over 9m4s)     kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
	  Normal   Pulling    108s (x5 over 10m)     kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x5 over 9m5s)      kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Warning  Failed     9s (x2 over 5m22s)     kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             hello-node-connect-9f67c86d4-twx5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-089730/192.168.39.143
	Start Time:       Sun, 21 Dec 2025 20:09:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkn52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkn52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-twx5b to functional-089730
	  Warning  Failed     3m46s                kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m5s (x4 over 10m)   kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	  Warning  Failed     75s (x3 over 8m35s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x4 over 8m35s)  kubelet            spec.containers{echo-server}: Error: ErrImagePull
	  Normal   BackOff    9s (x9 over 8m34s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x9 over 8m34s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-ntrf6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-txnj9" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-089730 describe pod busybox-mount hello-node-5758569b79-cb7fv hello-node-connect-9f67c86d4-twx5b dashboard-metrics-scraper-5565989548-ntrf6 kubernetes-dashboard-b84665fb8-txnj9: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.87s)
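Both hello-node pods above are stuck in ImagePullBackOff because anonymous pulls of kicbase/echo-server from docker.io keep hitting the unauthenticated pull rate limit (toomanyrequests). A quick way to confirm how much anonymous quota the host IP has left is Docker Hub's documented rate-limit check; a minimal sketch, assuming curl and jq are installed:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
	# Look for the ratelimit-limit / ratelimit-remaining headers; a remaining value of 0 matches the toomanyrequests errors above.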

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-089730 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-089730 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-cb7fv" [31efe17a-06f9-4507-ad66-b645f379b8ad] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-089730 -n functional-089730
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-21 20:19:08.070050002 +0000 UTC m=+1976.907841849
functional_test.go:1460: (dbg) Run:  kubectl --context functional-089730 describe po hello-node-5758569b79-cb7fv -n default
functional_test.go:1460: (dbg) kubectl --context functional-089730 describe po hello-node-5758569b79-cb7fv -n default:
Name:             hello-node-5758569b79-cb7fv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-089730/192.168.39.143
Start Time:       Sun, 21 Dec 2025 20:09:07 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vfkxv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vfkxv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-cb7fv to functional-089730
Warning  Failed     9m2s                   kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m13s (x2 over 7m56s)  kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    117s (x10 over 9m1s)   kubelet            spec.containers{echo-server}: Back-off pulling image "kicbase/echo-server"
Warning  Failed     117s (x10 over 9m1s)   kubelet            spec.containers{echo-server}: Error: ImagePullBackOff
Normal   Pulling    105s (x5 over 10m)     kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
Warning  Failed     6s (x5 over 9m2s)      kubelet            spec.containers{echo-server}: Error: ErrImagePull
Warning  Failed     6s (x2 over 5m19s)     kubelet            spec.containers{echo-server}: Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test.go:1460: (dbg) Run:  kubectl --context functional-089730 logs hello-node-5758569b79-cb7fv -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-089730 logs hello-node-5758569b79-cb7fv -n default: exit status 1 (75.014457ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-cb7fv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-089730 logs hello-node-5758569b79-cb7fv -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.64s)
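The deployment itself is fine; the pod never starts only because the anonymous pull of kicbase/echo-server is rate limited. One possible workaround (a sketch, not part of the test; DOCKER_USER and DOCKER_PAT are placeholders for your own Docker Hub credentials) is to attach an image pull secret to the default service account so the kubelet pulls as an authenticated user, which carries a higher quota, and then restart the affected deployments so new pods pick the secret up:

	kubectl --context functional-089730 create secret docker-registry dockerhub-creds \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context functional-089730 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
	# imagePullSecrets on a service account are injected at pod creation, so existing pods must be recreated:
	kubectl --context functional-089730 rollout restart deployment/hello-node deployment/hello-node-connect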

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 service --namespace=default --https --url hello-node: exit status 115 (285.676158ms)

                                                
                                                
-- stdout --
	https://192.168.39.143:31950
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-089730 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.29s)
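SVC_UNREACHABLE here is a knock-on effect of the DeployApp failure above: the hello-node service exists and was assigned NodePort 31950, but no backing pod ever became Ready, so minikube service refuses to report the URL as usable. The Format and URL subtests below fail for the same reason. A quick check that the service has no ready endpoints:

	kubectl --context functional-089730 get pods -l app=hello-node -o wide   # pod still in ImagePullBackOff
	kubectl --context functional-089730 get endpoints hello-node             # no ready addresses while the pod is not Ready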

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 service hello-node --url --format={{.IP}}: exit status 115 (275.732856ms)

                                                
                                                
-- stdout --
	192.168.39.143
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-089730 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 service hello-node --url: exit status 115 (252.966098ms)

                                                
                                                
-- stdout --
	http://192.168.39.143:31950
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-089730 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.143:31950
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (47.72s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-759510 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-759510 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (47.506371407s)
preload_test.go:77: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-759510 image list
preload_test.go:82: Expected to find public.ecr.aws/docker/library/busybox:latest in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
--- FAIL: TestPreload/Restart-With-Preload-Check-User-Image (47.72s)
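The restart with --preload=true succeeds; the failure is only that the user image public.ecr.aws/docker/library/busybox:latest, pulled or loaded onto the node in an earlier step of TestPreload, no longer appears in the image list afterwards. A manual way to check whether the image survived in the node's CRI-O store, assuming the test-preload-759510 profile is still running:

	out/minikube-linux-amd64 -p test-preload-759510 ssh -- sudo crictl images | grep busybox   # list images as CRI-O sees them on the node
	out/minikube-linux-amd64 -p test-preload-759510 image list                                 # the same view the test asserts on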

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (85.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-471447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.644952431s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-471447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-471447" primary control-plane node in "pause-471447" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-471447" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 21:04:35.163956  163580 out.go:360] Setting OutFile to fd 1 ...
	I1221 21:04:35.164067  163580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:04:35.164085  163580 out.go:374] Setting ErrFile to fd 2...
	I1221 21:04:35.164093  163580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:04:35.164281  163580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 21:04:35.164738  163580 out.go:368] Setting JSON to false
	I1221 21:04:35.165680  163580 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17225,"bootTime":1766333850,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 21:04:35.165741  163580 start.go:143] virtualization: kvm guest
	I1221 21:04:35.167796  163580 out.go:179] * [pause-471447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 21:04:35.169328  163580 notify.go:221] Checking for updates...
	I1221 21:04:35.169336  163580 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 21:04:35.170802  163580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 21:04:35.172267  163580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:04:35.173566  163580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 21:04:35.174979  163580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 21:04:35.176463  163580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 21:04:35.178498  163580 config.go:182] Loaded profile config "pause-471447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:04:35.179190  163580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 21:04:35.213586  163580 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 21:04:35.214956  163580 start.go:309] selected driver: kvm2
	I1221 21:04:35.214977  163580 start.go:928] validating driver "kvm2" against &{Name:pause-471447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-471447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.123 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:04:35.215191  163580 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 21:04:35.216245  163580 cni.go:84] Creating CNI manager for ""
	I1221 21:04:35.216313  163580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 21:04:35.216364  163580 start.go:353] cluster config:
	{Name:pause-471447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-471447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.123 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:04:35.216478  163580 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 21:04:35.218928  163580 out.go:179] * Starting "pause-471447" primary control-plane node in "pause-471447" cluster
	I1221 21:04:35.220694  163580 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 21:04:35.220739  163580 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 21:04:35.220764  163580 cache.go:65] Caching tarball of preloaded images
	I1221 21:04:35.220888  163580 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 21:04:35.220908  163580 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 21:04:35.221064  163580 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/config.json ...
	I1221 21:04:35.221281  163580 start.go:360] acquireMachinesLock for pause-471447: {Name:mkd449b545e9165e82ce02652c0c22eb5894063b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1221 21:05:04.693627  163580 start.go:364] duration metric: took 29.472277085s to acquireMachinesLock for "pause-471447"
	I1221 21:05:04.693691  163580 start.go:96] Skipping create...Using existing machine configuration
	I1221 21:05:04.693700  163580 fix.go:54] fixHost starting: 
	I1221 21:05:04.696513  163580 fix.go:112] recreateIfNeeded on pause-471447: state=Running err=<nil>
	W1221 21:05:04.696553  163580 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 21:05:04.700011  163580 out.go:252] * Updating the running kvm2 "pause-471447" VM ...
	I1221 21:05:04.700052  163580 machine.go:94] provisionDockerMachine start ...
	I1221 21:05:04.704343  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.704955  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:04.704990  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.705325  163580 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:04.705644  163580 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.94.123 22 <nil> <nil>}
	I1221 21:05:04.705661  163580 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 21:05:04.827554  163580 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-471447
	
	I1221 21:05:04.827599  163580 buildroot.go:166] provisioning hostname "pause-471447"
	I1221 21:05:04.831682  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.832320  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:04.832359  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.832610  163580 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:04.832870  163580 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.94.123 22 <nil> <nil>}
	I1221 21:05:04.832885  163580 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-471447 && echo "pause-471447" | sudo tee /etc/hostname
	I1221 21:05:04.961515  163580 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-471447
	
	I1221 21:05:04.964671  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.965108  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:04.965138  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:04.965370  163580 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:04.965647  163580 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.94.123 22 <nil> <nil>}
	I1221 21:05:04.965664  163580 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-471447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-471447/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-471447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 21:05:05.076985  163580 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 21:05:05.077016  163580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22179-122429/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-122429/.minikube}
	I1221 21:05:05.077058  163580 buildroot.go:174] setting up certificates
	I1221 21:05:05.077070  163580 provision.go:84] configureAuth start
	I1221 21:05:05.080662  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.081147  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:05.081177  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.084103  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.084590  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:05.084626  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.084812  163580 provision.go:143] copyHostCerts
	I1221 21:05:05.084896  163580 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem, removing ...
	I1221 21:05:05.084917  163580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem
	I1221 21:05:05.085010  163580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem (1082 bytes)
	I1221 21:05:05.085174  163580 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem, removing ...
	I1221 21:05:05.085191  163580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem
	I1221 21:05:05.085230  163580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem (1123 bytes)
	I1221 21:05:05.085322  163580 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem, removing ...
	I1221 21:05:05.085335  163580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem
	I1221 21:05:05.085371  163580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem (1679 bytes)
	I1221 21:05:05.085444  163580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem org=jenkins.pause-471447 san=[127.0.0.1 192.168.94.123 localhost minikube pause-471447]
	I1221 21:05:05.135411  163580 provision.go:177] copyRemoteCerts
	I1221 21:05:05.135494  163580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 21:05:05.138547  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.138994  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:05.139017  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.139200  163580 sshutil.go:53] new ssh client: &{IP:192.168.94.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/pause-471447/id_rsa Username:docker}
	I1221 21:05:05.230663  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 21:05:05.272593  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1221 21:05:05.310742  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 21:05:05.346220  163580 provision.go:87] duration metric: took 269.131665ms to configureAuth
	I1221 21:05:05.346257  163580 buildroot.go:189] setting minikube options for container-runtime
	I1221 21:05:05.346538  163580 config.go:182] Loaded profile config "pause-471447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:05:05.350343  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.350905  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:05.350947  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:05.351220  163580 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:05.351567  163580 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.94.123 22 <nil> <nil>}
	I1221 21:05:05.351606  163580 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 21:05:10.975442  163580 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 21:05:10.975517  163580 machine.go:97] duration metric: took 6.275427643s to provisionDockerMachine
	I1221 21:05:10.975535  163580 start.go:293] postStartSetup for "pause-471447" (driver="kvm2")
	I1221 21:05:10.975552  163580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 21:05:10.975641  163580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 21:05:10.978867  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:10.979348  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:10.979385  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:10.979622  163580 sshutil.go:53] new ssh client: &{IP:192.168.94.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/pause-471447/id_rsa Username:docker}
	I1221 21:05:11.064589  163580 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 21:05:11.070043  163580 info.go:137] Remote host: Buildroot 2025.02
	I1221 21:05:11.070084  163580 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/addons for local assets ...
	I1221 21:05:11.070142  163580 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-122429/.minikube/files for local assets ...
	I1221 21:05:11.070214  163580 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/ssl/certs/1263452.pem -> 1263452.pem in /etc/ssl/certs
	I1221 21:05:11.070299  163580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 21:05:11.082890  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/ssl/certs/1263452.pem --> /etc/ssl/certs/1263452.pem (1708 bytes)
	I1221 21:05:11.118708  163580 start.go:296] duration metric: took 143.149809ms for postStartSetup
	I1221 21:05:11.118774  163580 fix.go:56] duration metric: took 6.425074211s for fixHost
	I1221 21:05:11.122068  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.122568  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:11.122591  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.122852  163580 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:11.123176  163580 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.94.123 22 <nil> <nil>}
	I1221 21:05:11.123190  163580 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1221 21:05:11.229669  163580 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766351111.223605288
	
	I1221 21:05:11.229698  163580 fix.go:216] guest clock: 1766351111.223605288
	I1221 21:05:11.229708  163580 fix.go:229] Guest: 2025-12-21 21:05:11.223605288 +0000 UTC Remote: 2025-12-21 21:05:11.118779507 +0000 UTC m=+36.015847753 (delta=104.825781ms)
	I1221 21:05:11.229734  163580 fix.go:200] guest clock delta is within tolerance: 104.825781ms
	I1221 21:05:11.229743  163580 start.go:83] releasing machines lock for "pause-471447", held for 6.536075232s
	I1221 21:05:11.233074  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.233572  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:11.233617  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.234632  163580 ssh_runner.go:195] Run: cat /version.json
	I1221 21:05:11.234798  163580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 21:05:11.238458  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.238731  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.239000  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:11.239037  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.239243  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:11.239273  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:11.239268  163580 sshutil.go:53] new ssh client: &{IP:192.168.94.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/pause-471447/id_rsa Username:docker}
	I1221 21:05:11.239531  163580 sshutil.go:53] new ssh client: &{IP:192.168.94.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/pause-471447/id_rsa Username:docker}
	I1221 21:05:11.319212  163580 ssh_runner.go:195] Run: systemctl --version
	I1221 21:05:11.345091  163580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 21:05:11.504845  163580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 21:05:11.513855  163580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 21:05:11.513943  163580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 21:05:11.529635  163580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 21:05:11.529680  163580 start.go:496] detecting cgroup driver to use...
	I1221 21:05:11.529766  163580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 21:05:11.558304  163580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 21:05:11.578459  163580 docker.go:218] disabling cri-docker service (if available) ...
	I1221 21:05:11.578570  163580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 21:05:11.603102  163580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 21:05:11.624569  163580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 21:05:11.880393  163580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 21:05:12.082072  163580 docker.go:234] disabling docker service ...
	I1221 21:05:12.082147  163580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 21:05:12.113598  163580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 21:05:12.130368  163580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 21:05:12.369655  163580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 21:05:12.554942  163580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 21:05:12.573349  163580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 21:05:12.601873  163580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 21:05:12.601982  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.616804  163580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 21:05:12.616882  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.630622  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.644474  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.658660  163580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 21:05:12.672225  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.690230  163580 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.705747  163580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 21:05:12.725466  163580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 21:05:12.741121  163580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 21:05:12.753964  163580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 21:05:12.957716  163580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 21:05:13.478967  163580 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 21:05:13.479062  163580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 21:05:13.485412  163580 start.go:564] Will wait 60s for crictl version
	I1221 21:05:13.485480  163580 ssh_runner.go:195] Run: which crictl
	I1221 21:05:13.491345  163580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 21:05:13.532677  163580 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1221 21:05:13.532756  163580 ssh_runner.go:195] Run: crio --version
	I1221 21:05:13.575189  163580 ssh_runner.go:195] Run: crio --version
	I1221 21:05:13.610984  163580 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1221 21:05:13.615594  163580 main.go:144] libmachine: domain pause-471447 has defined MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:13.616094  163580 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f3:44", ip: ""} in network mk-pause-471447: {Iface:virbr6 ExpiryTime:2025-12-21 22:03:31 +0000 UTC Type:0 Mac:52:54:00:3a:f3:44 Iaid: IPaddr:192.168.94.123 Prefix:24 Hostname:pause-471447 Clientid:01:52:54:00:3a:f3:44}
	I1221 21:05:13.616129  163580 main.go:144] libmachine: domain pause-471447 has defined IP address 192.168.94.123 and MAC address 52:54:00:3a:f3:44 in network mk-pause-471447
	I1221 21:05:13.616417  163580 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1221 21:05:13.621593  163580 kubeadm.go:884] updating cluster {Name:pause-471447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-471447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.123 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 21:05:13.621731  163580 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 21:05:13.621775  163580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 21:05:13.665428  163580 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 21:05:13.665457  163580 crio.go:433] Images already preloaded, skipping extraction
	I1221 21:05:13.665532  163580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 21:05:13.709851  163580 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 21:05:13.709888  163580 cache_images.go:86] Images are preloaded, skipping loading
	I1221 21:05:13.709898  163580 kubeadm.go:935] updating node { 192.168.94.123 8443 v1.34.3 crio true true} ...
	I1221 21:05:13.710022  163580 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-471447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-471447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 21:05:13.710122  163580 ssh_runner.go:195] Run: crio config
	I1221 21:05:13.772821  163580 cni.go:84] Creating CNI manager for ""
	I1221 21:05:13.772853  163580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 21:05:13.772874  163580 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 21:05:13.772904  163580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.123 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-471447 NodeName:pause-471447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 21:05:13.773096  163580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-471447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 21:05:13.773200  163580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 21:05:13.787758  163580 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 21:05:13.787831  163580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 21:05:13.804305  163580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1221 21:05:13.828052  163580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 21:05:13.857164  163580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1221 21:05:13.883907  163580 ssh_runner.go:195] Run: grep 192.168.94.123	control-plane.minikube.internal$ /etc/hosts
	I1221 21:05:13.888930  163580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 21:05:14.112149  163580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 21:05:14.142698  163580 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447 for IP: 192.168.94.123
	I1221 21:05:14.142733  163580 certs.go:195] generating shared ca certs ...
	I1221 21:05:14.142755  163580 certs.go:227] acquiring lock for ca certs: {Name:mkda19a66cdf101dd9d66a3219f3492b9fb00ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 21:05:14.142996  163580 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key
	I1221 21:05:14.143075  163580 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key
	I1221 21:05:14.143093  163580 certs.go:257] generating profile certs ...
	I1221 21:05:14.143270  163580 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/client.key
	I1221 21:05:14.143368  163580 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/apiserver.key.c6b30b36
	I1221 21:05:14.143445  163580 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/proxy-client.key
	I1221 21:05:14.143641  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/126345.pem (1338 bytes)
	W1221 21:05:14.143699  163580 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-122429/.minikube/certs/126345_empty.pem, impossibly tiny 0 bytes
	I1221 21:05:14.143717  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 21:05:14.143763  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem (1082 bytes)
	I1221 21:05:14.143806  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem (1123 bytes)
	I1221 21:05:14.143843  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem (1679 bytes)
	I1221 21:05:14.143925  163580 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/ssl/certs/1263452.pem (1708 bytes)
	I1221 21:05:14.144894  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 21:05:14.193558  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1221 21:05:14.240311  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 21:05:14.320373  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 21:05:14.404852  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 21:05:14.492084  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 21:05:14.615833  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 21:05:14.714987  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 21:05:14.843114  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/126345.pem --> /usr/share/ca-certificates/126345.pem (1338 bytes)
	I1221 21:05:14.950685  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/ssl/certs/1263452.pem --> /usr/share/ca-certificates/1263452.pem (1708 bytes)
	I1221 21:05:15.063041  163580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 21:05:15.166881  163580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 21:05:15.226041  163580 ssh_runner.go:195] Run: openssl version
	I1221 21:05:15.243314  163580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 21:05:15.274599  163580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 21:05:15.385293  163580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 21:05:15.402325  163580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 21:05:15.402426  163580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 21:05:15.454026  163580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 21:05:15.503917  163580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/126345.pem
	I1221 21:05:15.543982  163580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/126345.pem /etc/ssl/certs/126345.pem
	I1221 21:05:15.575929  163580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126345.pem
	I1221 21:05:15.589572  163580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 20:06 /usr/share/ca-certificates/126345.pem
	I1221 21:05:15.589653  163580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126345.pem
	I1221 21:05:15.613045  163580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 21:05:15.657989  163580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1263452.pem
	I1221 21:05:15.688247  163580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1263452.pem /etc/ssl/certs/1263452.pem
	I1221 21:05:15.726077  163580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1263452.pem
	I1221 21:05:15.743962  163580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 20:06 /usr/share/ca-certificates/1263452.pem
	I1221 21:05:15.744064  163580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1263452.pem
	I1221 21:05:15.769916  163580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 21:05:15.832984  163580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 21:05:15.852844  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 21:05:15.881086  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 21:05:15.905163  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 21:05:15.921000  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 21:05:15.940432  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 21:05:15.954961  163580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 21:05:15.975535  163580 kubeadm.go:401] StartCluster: {Name:pause-471447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-471447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.123 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:05:15.975703  163580 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 21:05:15.975818  163580 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 21:05:16.045564  163580 cri.go:96] found id: "93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb4364083c5410000"
	I1221 21:05:16.045597  163580 cri.go:96] found id: "bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603"
	I1221 21:05:16.045604  163580 cri.go:96] found id: "d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0"
	I1221 21:05:16.045608  163580 cri.go:96] found id: "54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4"
	I1221 21:05:16.045614  163580 cri.go:96] found id: "f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6"
	I1221 21:05:16.045619  163580 cri.go:96] found id: "1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82"
	I1221 21:05:16.045624  163580 cri.go:96] found id: "92e0b89e0700d665a861242081e70b2de9f3002559a2735636710ee64d8b4165"
	I1221 21:05:16.045628  163580 cri.go:96] found id: "4f2e622b0d45e0a9f0298eb28784b85a5a071b9c6442842cc4a2ab9f573bb45e"
	I1221 21:05:16.045633  163580 cri.go:96] found id: "0f325222fa16a95f806fac48454b1d88df4e2fa051d189e4b45a7c9862640beb"
	I1221 21:05:16.045647  163580 cri.go:96] found id: "8f6155d5f205b091c7c687ff38a225be4508d7df76566da398a0c27c72f29580"
	I1221 21:05:16.045653  163580 cri.go:96] found id: ""
	I1221 21:05:16.045710  163580 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-471447 -n pause-471447
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-471447 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-471447 logs -n 25: (1.522719159s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-340687 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │                     │
	│ ssh     │ -p cilium-340687 sudo crio config                                                                                                                                                                                                           │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │                     │
	│ delete  │ -p cilium-340687                                                                                                                                                                                                                            │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │ 21 Dec 25 21:01 UTC │
	│ start   │ -p guest-667849 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-667849              │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │ 21 Dec 25 21:02 UTC │
	│ delete  │ -p force-systemd-env-764266                                                                                                                                                                                                                 │ force-systemd-env-764266  │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p cert-expiration-514100 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-514100    │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p force-systemd-flag-048347 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:03 UTC │
	│ image   │ test-preload-759510 image pull public.ecr.aws/docker/library/busybox:latest                                                                                                                                                                 │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ stop    │ -p test-preload-759510                                                                                                                                                                                                                      │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p test-preload-759510 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                                                                                                            │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:03 UTC │
	│ ssh     │ force-systemd-flag-048347 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ delete  │ -p force-systemd-flag-048347                                                                                                                                                                                                                │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ start   │ -p pause-471447 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-471447              │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:04 UTC │
	│ image   │ test-preload-759510 image list                                                                                                                                                                                                              │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ delete  │ -p test-preload-759510                                                                                                                                                                                                                      │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ start   │ -p cert-options-764127 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:04 UTC │
	│ ssh     │ cert-options-764127 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ ssh     │ -p cert-options-764127 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ delete  │ -p cert-options-764127                                                                                                                                                                                                                      │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ start   │ -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-458928    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:05 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-787082 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-787082    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │                     │
	│ delete  │ -p running-upgrade-787082                                                                                                                                                                                                                   │ running-upgrade-787082    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ start   │ -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-419917         │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │                     │
	│ start   │ -p pause-471447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-471447              │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:05 UTC │
	│ start   │ -p cert-expiration-514100 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-514100    │ jenkins │ v1.37.0 │ 21 Dec 25 21:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 21:05:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 21:05:49.819786  164091 out.go:360] Setting OutFile to fd 1 ...
	I1221 21:05:49.819873  164091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:05:49.819876  164091 out.go:374] Setting ErrFile to fd 2...
	I1221 21:05:49.819879  164091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:05:49.820065  164091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 21:05:49.820576  164091 out.go:368] Setting JSON to false
	I1221 21:05:49.821503  164091 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1766333850,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 21:05:49.821561  164091 start.go:143] virtualization: kvm guest
	I1221 21:05:49.823745  164091 out.go:179] * [cert-expiration-514100] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 21:05:49.825033  164091 notify.go:221] Checking for updates...
	I1221 21:05:49.825063  164091 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 21:05:49.826418  164091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 21:05:49.827836  164091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:05:49.829071  164091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 21:05:49.830383  164091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 21:05:49.831621  164091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 21:05:49.833341  164091 config.go:182] Loaded profile config "cert-expiration-514100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:05:49.834034  164091 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 21:05:49.871631  164091 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 21:05:49.872952  164091 start.go:309] selected driver: kvm2
	I1221 21:05:49.872975  164091 start.go:928] validating driver "kvm2" against &{Name:cert-expiration-514100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-514100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:05:49.873076  164091 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 21:05:49.874154  164091 cni.go:84] Creating CNI manager for ""
	I1221 21:05:49.874208  164091 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 21:05:49.874239  164091 start.go:353] cluster config:
	{Name:cert-expiration-514100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-514100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:05:49.874324  164091 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 21:05:49.876080  164091 out.go:179] * Starting "cert-expiration-514100" primary control-plane node in "cert-expiration-514100" cluster
	I1221 21:05:49.877351  164091 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 21:05:49.877377  164091 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 21:05:49.877391  164091 cache.go:65] Caching tarball of preloaded images
	I1221 21:05:49.877510  164091 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 21:05:49.877517  164091 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 21:05:49.877594  164091 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/cert-expiration-514100/config.json ...
	I1221 21:05:49.877797  164091 start.go:360] acquireMachinesLock for cert-expiration-514100: {Name:mkd449b545e9165e82ce02652c0c22eb5894063b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1221 21:05:49.877840  164091 start.go:364] duration metric: took 31.049µs to acquireMachinesLock for "cert-expiration-514100"
	I1221 21:05:49.877851  164091 start.go:96] Skipping create...Using existing machine configuration
	I1221 21:05:49.877855  164091 fix.go:54] fixHost starting: 
	I1221 21:05:49.879818  164091 fix.go:112] recreateIfNeeded on cert-expiration-514100: state=Running err=<nil>
	W1221 21:05:49.879839  164091 fix.go:138] unexpected machine state, will restart: <nil>
	W1221 21:05:46.306982  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:48.806203  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	I1221 21:05:45.319836  163580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 21:05:45.338375  163580 node_ready.go:35] waiting up to 6m0s for node "pause-471447" to be "Ready" ...
	I1221 21:05:45.343334  163580 node_ready.go:49] node "pause-471447" is "Ready"
	I1221 21:05:45.343365  163580 node_ready.go:38] duration metric: took 4.936763ms for node "pause-471447" to be "Ready" ...
	I1221 21:05:45.343379  163580 api_server.go:52] waiting for apiserver process to appear ...
	I1221 21:05:45.343429  163580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 21:05:45.364211  163580 api_server.go:72] duration metric: took 260.373622ms to wait for apiserver process to appear ...
	I1221 21:05:45.364242  163580 api_server.go:88] waiting for apiserver healthz status ...
	I1221 21:05:45.364271  163580 api_server.go:253] Checking apiserver healthz at https://192.168.94.123:8443/healthz ...
	I1221 21:05:45.370604  163580 api_server.go:279] https://192.168.94.123:8443/healthz returned 200:
	ok
	I1221 21:05:45.372216  163580 api_server.go:141] control plane version: v1.34.3
	I1221 21:05:45.372246  163580 api_server.go:131] duration metric: took 7.995465ms to wait for apiserver health ...
	I1221 21:05:45.372277  163580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 21:05:45.376527  163580 system_pods.go:59] 6 kube-system pods found
	I1221 21:05:45.376561  163580 system_pods.go:61] "coredns-66bc5c9577-4fcrq" [9a867251-b10f-44e7-ada2-057f1bb6273e] Running
	I1221 21:05:45.376580  163580 system_pods.go:61] "etcd-pause-471447" [a9e0adb8-8533-49ab-a3f3-e1d7b67590a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:05:45.376590  163580 system_pods.go:61] "kube-apiserver-pause-471447" [676e5648-fd2a-43a8-833b-72a4e83a298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:05:45.376605  163580 system_pods.go:61] "kube-controller-manager-pause-471447" [369471f4-ef0b-4855-b109-4ac8fede00e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:05:45.376610  163580 system_pods.go:61] "kube-proxy-76nfp" [b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6] Running
	I1221 21:05:45.376619  163580 system_pods.go:61] "kube-scheduler-pause-471447" [1e76d4a2-86f6-4eaf-b970-bd879f047ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 21:05:45.376632  163580 system_pods.go:74] duration metric: took 4.344471ms to wait for pod list to return data ...
	I1221 21:05:45.376647  163580 default_sa.go:34] waiting for default service account to be created ...
	I1221 21:05:45.380135  163580 default_sa.go:45] found service account: "default"
	I1221 21:05:45.380169  163580 default_sa.go:55] duration metric: took 3.510695ms for default service account to be created ...
	I1221 21:05:45.380178  163580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 21:05:45.382916  163580 system_pods.go:86] 6 kube-system pods found
	I1221 21:05:45.382942  163580 system_pods.go:89] "coredns-66bc5c9577-4fcrq" [9a867251-b10f-44e7-ada2-057f1bb6273e] Running
	I1221 21:05:45.382951  163580 system_pods.go:89] "etcd-pause-471447" [a9e0adb8-8533-49ab-a3f3-e1d7b67590a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:05:45.382957  163580 system_pods.go:89] "kube-apiserver-pause-471447" [676e5648-fd2a-43a8-833b-72a4e83a298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:05:45.382964  163580 system_pods.go:89] "kube-controller-manager-pause-471447" [369471f4-ef0b-4855-b109-4ac8fede00e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:05:45.382969  163580 system_pods.go:89] "kube-proxy-76nfp" [b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6] Running
	I1221 21:05:45.382974  163580 system_pods.go:89] "kube-scheduler-pause-471447" [1e76d4a2-86f6-4eaf-b970-bd879f047ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 21:05:45.382981  163580 system_pods.go:126] duration metric: took 2.79789ms to wait for k8s-apps to be running ...
	I1221 21:05:45.382989  163580 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 21:05:45.383044  163580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 21:05:45.408505  163580 system_svc.go:56] duration metric: took 25.488943ms WaitForService to wait for kubelet
	I1221 21:05:45.408545  163580 kubeadm.go:587] duration metric: took 304.71232ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 21:05:45.408572  163580 node_conditions.go:102] verifying NodePressure condition ...
	I1221 21:05:45.412571  163580 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1221 21:05:45.412604  163580 node_conditions.go:123] node cpu capacity is 2
	I1221 21:05:45.412623  163580 node_conditions.go:105] duration metric: took 4.044046ms to run NodePressure ...
	I1221 21:05:45.412640  163580 start.go:242] waiting for startup goroutines ...
	I1221 21:05:45.412650  163580 start.go:247] waiting for cluster config update ...
	I1221 21:05:45.412667  163580 start.go:256] writing updated cluster config ...
	I1221 21:05:45.413130  163580 ssh_runner.go:195] Run: rm -f paused
	I1221 21:05:45.421201  163580 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:45.422288  163580 kapi.go:59] client config for pause-471447: &rest.Config{Host:"https://192.168.94.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/client.key", CAFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 21:05:45.425820  163580 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4fcrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:45.432608  163580 pod_ready.go:94] pod "coredns-66bc5c9577-4fcrq" is "Ready"
	I1221 21:05:45.432637  163580 pod_ready.go:86] duration metric: took 6.794741ms for pod "coredns-66bc5c9577-4fcrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:45.439893  163580 pod_ready.go:83] waiting for pod "etcd-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 21:05:47.446314  163580 pod_ready.go:104] pod "etcd-pause-471447" is not "Ready", error: <nil>
	I1221 21:05:47.946878  163580 pod_ready.go:94] pod "etcd-pause-471447" is "Ready"
	I1221 21:05:47.946906  163580 pod_ready.go:86] duration metric: took 2.506984997s for pod "etcd-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:47.949427  163580 pod_ready.go:83] waiting for pod "kube-apiserver-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 21:05:49.955512  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	W1221 21:05:50.555382  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	W1221 21:05:53.052849  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	I1221 21:05:49.881523  164091 out.go:252] * Updating the running kvm2 "cert-expiration-514100" VM ...
	I1221 21:05:49.881543  164091 machine.go:94] provisionDockerMachine start ...
	I1221 21:05:49.884035  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:49.884529  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:49.884545  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:49.884714  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:49.884918  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:49.884923  164091 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 21:05:49.996251  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-514100
	
	I1221 21:05:49.996275  164091 buildroot.go:166] provisioning hostname "cert-expiration-514100"
	I1221 21:05:49.999972  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.000463  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.000510  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.000707  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.000991  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.001003  164091 main.go:144] libmachine: About to run SSH command:
	sudo hostname cert-expiration-514100 && echo "cert-expiration-514100" | sudo tee /etc/hostname
	I1221 21:05:50.129769  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-514100
	
	I1221 21:05:50.133241  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.133734  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.133762  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.133926  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.134123  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.134132  164091 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-514100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-514100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-514100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 21:05:50.239330  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 21:05:50.239351  164091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22179-122429/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-122429/.minikube}
	I1221 21:05:50.239370  164091 buildroot.go:174] setting up certificates
	I1221 21:05:50.239379  164091 provision.go:84] configureAuth start
	I1221 21:05:50.242428  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.242772  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.242789  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245046  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245414  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.245431  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245558  164091 provision.go:143] copyHostCerts
	I1221 21:05:50.245618  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem, removing ...
	I1221 21:05:50.245629  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem
	I1221 21:05:50.245702  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem (1082 bytes)
	I1221 21:05:50.245835  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem, removing ...
	I1221 21:05:50.245839  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem
	I1221 21:05:50.245867  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem (1123 bytes)
	I1221 21:05:50.245916  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem, removing ...
	I1221 21:05:50.245919  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem
	I1221 21:05:50.245939  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem (1679 bytes)
	I1221 21:05:50.245993  164091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-514100 san=[127.0.0.1 192.168.50.159 cert-expiration-514100 localhost minikube]
	I1221 21:05:50.313899  164091 provision.go:177] copyRemoteCerts
	I1221 21:05:50.313981  164091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 21:05:50.317055  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.317516  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.317539  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.317694  164091 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/cert-expiration-514100/id_rsa Username:docker}
	I1221 21:05:50.401919  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 21:05:50.439534  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1221 21:05:50.476843  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 21:05:50.516573  164091 provision.go:87] duration metric: took 277.178435ms to configureAuth
	I1221 21:05:50.516600  164091 buildroot.go:189] setting minikube options for container-runtime
	I1221 21:05:50.516857  164091 config.go:182] Loaded profile config "cert-expiration-514100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:05:50.520307  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.520752  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.520767  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.520960  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.521161  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.521168  164091 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1221 21:05:50.808612  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:53.307693  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:51.956729  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	W1221 21:05:54.456455  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	I1221 21:05:56.456127  163580 pod_ready.go:94] pod "kube-apiserver-pause-471447" is "Ready"
	I1221 21:05:56.456166  163580 pod_ready.go:86] duration metric: took 8.506718193s for pod "kube-apiserver-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.459636  163580 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.466907  163580 pod_ready.go:94] pod "kube-controller-manager-pause-471447" is "Ready"
	I1221 21:05:56.466950  163580 pod_ready.go:86] duration metric: took 7.273093ms for pod "kube-controller-manager-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.470653  163580 pod_ready.go:83] waiting for pod "kube-proxy-76nfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.477292  163580 pod_ready.go:94] pod "kube-proxy-76nfp" is "Ready"
	I1221 21:05:56.477335  163580 pod_ready.go:86] duration metric: took 6.644759ms for pod "kube-proxy-76nfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.479545  163580 pod_ready.go:83] waiting for pod "kube-scheduler-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.654729  163580 pod_ready.go:94] pod "kube-scheduler-pause-471447" is "Ready"
	I1221 21:05:56.654767  163580 pod_ready.go:86] duration metric: took 175.190158ms for pod "kube-scheduler-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.654785  163580 pod_ready.go:40] duration metric: took 11.233547255s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:56.717504  163580 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 21:05:56.719611  163580 out.go:179] * Done! kubectl is now configured to use "pause-471447" cluster and "default" namespace by default
	I1221 21:05:55.307100  163342 pod_ready.go:94] pod "coredns-5dd5756b68-xp8fg" is "Ready"
	I1221 21:05:55.307138  163342 pod_ready.go:86] duration metric: took 27.507559094s for pod "coredns-5dd5756b68-xp8fg" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.311173  163342 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.317617  163342 pod_ready.go:94] pod "etcd-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.317648  163342 pod_ready.go:86] duration metric: took 6.448841ms for pod "etcd-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.321218  163342 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.333479  163342 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.333532  163342 pod_ready.go:86] duration metric: took 12.285995ms for pod "kube-apiserver-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.336721  163342 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.503187  163342 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.503219  163342 pod_ready.go:86] duration metric: took 166.467785ms for pod "kube-controller-manager-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.707058  163342 pod_ready.go:83] waiting for pod "kube-proxy-6d8w8" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.104404  163342 pod_ready.go:94] pod "kube-proxy-6d8w8" is "Ready"
	I1221 21:05:56.104445  163342 pod_ready.go:86] duration metric: took 397.344523ms for pod "kube-proxy-6d8w8" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.305399  163342 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.704616  163342 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-458928" is "Ready"
	I1221 21:05:56.704655  163342 pod_ready.go:86] duration metric: took 399.224899ms for pod "kube-scheduler-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.704674  163342 pod_ready.go:40] duration metric: took 38.915597646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:56.768132  163342 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1221 21:05:56.770304  163342 out.go:203] 
	W1221 21:05:56.771836  163342 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1221 21:05:56.773421  163342 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1221 21:05:56.775154  163342 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-458928" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.446999567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351157446964260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6be0027-09e9-4b45-a9d9-20ad5bcfe00c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.448661334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24fc8239-a929-4c7d-a6e3-649fb2c7d45a name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.448905772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24fc8239-a929-4c7d-a6e3-649fb2c7d45a name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.449911357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24fc8239-a929-4c7d-a6e3-649fb2c7d45a name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.509118251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d236da5-b80d-4727-8c2e-67dbe65ce310 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.509244804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d236da5-b80d-4727-8c2e-67dbe65ce310 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.513254508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce21009a-e593-44ca-9482-c75df04400d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.513979512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351157513939108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce21009a-e593-44ca-9482-c75df04400d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.515396960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eedf1ecd-483a-4302-9b3d-3f1cc1d19c58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.515778098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eedf1ecd-483a-4302-9b3d-3f1cc1d19c58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.516632555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eedf1ecd-483a-4302-9b3d-3f1cc1d19c58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.573502781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec65f296-c3d9-4a74-8f2e-dc153c6fd6fc name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.573631190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec65f296-c3d9-4a74-8f2e-dc153c6fd6fc name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.575133200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fceb89e0-7d26-4b67-8b5d-b230b265e5eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.575647544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351157575617799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fceb89e0-7d26-4b67-8b5d-b230b265e5eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.576980722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=979a50e6-e7c0-4db8-898e-b0d00e8c7524 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.577399084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=979a50e6-e7c0-4db8-898e-b0d00e8c7524 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.578249174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=979a50e6-e7c0-4db8-898e-b0d00e8c7524 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.636127601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bcd23e1-5cb2-4d9b-8694-9cf2dc69313a name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.636242298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bcd23e1-5cb2-4d9b-8694-9cf2dc69313a name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.637953227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=773a3156-4739-4154-b929-5a9c6c421022 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.638634638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351157638596274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=773a3156-4739-4154-b929-5a9c6c421022 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.640354620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1bfabaa-3052-48bd-893a-00e48d13ab64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.640538685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1bfabaa-3052-48bd-893a-00e48d13ab64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:57 pause-471447 crio[2803]: time="2025-12-21 21:05:57.641145216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1bfabaa-3052-48bd-893a-00e48d13ab64 name=/runtime.v1.RuntimeService/ListContainers
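Editor's note: the Version, ImageFsInfo, and ListContainers requests that repeat every few hundred milliseconds above are ordinary CRI gRPC calls against cri-o's socket (typically issued by the kubelet's status and stats loops). As an illustration only, a minimal Go sketch of the same ListContainers call follows; the socket path /var/run/crio/crio.sock is cri-o's usual default and is assumed here, and none of this code comes from the test suite.

// list-containers.go: illustrative sketch of a CRI ListContainers call (assumed socket path).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: cri-o is listening on its default socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" entries above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex chars; print the short form plus name and state.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

With an empty filter the call returns every container, which is why the responses above enumerate both the CONTAINER_RUNNING attempt-2 control-plane containers and their CONTAINER_EXITED attempt-1 predecessors.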
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	68d794a227631       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   17 seconds ago       Running             kube-apiserver            2                   c24196a03cc20       kube-apiserver-pause-471447            kube-system
	6309a5049c66e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   17 seconds ago       Running             kube-controller-manager   2                   4cfdc2729aa10       kube-controller-manager-pause-471447   kube-system
	4fa30cec5ae59       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   17 seconds ago       Running             kube-scheduler            2                   799a0567b6a81       kube-scheduler-pause-471447            kube-system
	756deeeab4bb7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago       Running             etcd                      2                   56918331a3709       etcd-pause-471447                      kube-system
	523955d2d2268       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   41 seconds ago       Running             coredns                   1                   60749c7bcea3b       coredns-66bc5c9577-4fcrq               kube-system
	f6abb2a527255       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   42 seconds ago       Running             kube-proxy                1                   68d45ff2273c2       kube-proxy-76nfp                       kube-system
	93325ef162c03       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   42 seconds ago       Exited              kube-controller-manager   1                   4cfdc2729aa10       kube-controller-manager-pause-471447   kube-system
	bafe84bead083       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   42 seconds ago       Exited              etcd                      1                   56918331a3709       etcd-pause-471447                      kube-system
	d354cf813045d       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   42 seconds ago       Exited              kube-scheduler            1                   799a0567b6a81       kube-scheduler-pause-471447            kube-system
	54f5cbc0350f6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   42 seconds ago       Exited              kube-apiserver            1                   c24196a03cc20       kube-apiserver-pause-471447            kube-system
	f74d0e9192c75       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   73296b379ad7c       coredns-66bc5c9577-4fcrq               kube-system
	1b7e7ebf7d263       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   2 minutes ago        Exited              kube-proxy                0                   e0deaffec26ef       kube-proxy-76nfp                       kube-system
	
	
	==> coredns [523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48830 - 42101 "HINFO IN 1270418721480851838.2093793463464312729. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018515858s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39092->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39086->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39100->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
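Editor's note: the reflector errors above come from the coredns kubernetes plugin trying to list Services, Namespaces, and EndpointSlices through the in-cluster service address https://10.96.0.1:443 while the apiserver is still restarting. The hedged client-go sketch below issues the same kind of list call; it is an illustration only, assumes it runs inside a pod with in-cluster credentials, and is not code from this report.

// list-endpointslices.go: illustrative reproduction of the list the reflector keeps retrying.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // resolves to https://10.96.0.1:443 from inside a pod
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same list+limit shape as the coredns kubernetes plugin; while the apiserver is
	// restarting this surfaces the TLS handshake timeout / connection refused errors
	// seen in the log above.
	slices, err := clientset.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).List(
		context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		panic(err)
	}
	fmt.Println("endpointslices:", len(slices.Items))
}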
	
	
	==> coredns [f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40164 - 51601 "HINFO IN 7054236044647090558.315467932222168931. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.014999555s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
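Editor's note: both coredns instances also log plugin/ready "Still waiting on: \"kubernetes\"" until that plugin syncs, and the container spec earlier in this log exposes the readiness port 8181. As a purely illustrative sketch (the pod IP 10.244.0.4 is taken from the reflector errors above and reused here as an assumption), the readiness endpoint can be polled like this:

// poll-ready.go: illustrative poll of the CoreDNS ready plugin endpoint (assumed pod IP).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("http://10.244.0.4:8181/ready")
		if err != nil {
			fmt.Println("probe error:", err)
		} else {
			// 503 while any monitored plugin (here: kubernetes) is still syncing,
			// 200 once all of them report ready.
			fmt.Println("status:", resp.StatusCode)
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}

This is the same signal the kubelet readiness probe on port 8181 relies on, which is why the pod only turns Ready after the "waiting for Kubernetes API" phase ends.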
	
	
	==> describe nodes <==
	Name:               pause-471447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-471447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-471447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T21_03_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 21:03:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-471447
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 21:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.123
	  Hostname:    pause-471447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c6a8e44b9524a34ab568486e0b3afb8
	  System UUID:                8c6a8e44-b952-4a34-ab56-8486e0b3afb8
	  Boot ID:                    0b24680a-4dd3-4e1f-a7cc-21ab4492f382
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4fcrq                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m
	  kube-system                 etcd-pause-471447                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-pause-471447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-pause-471447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-76nfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-pause-471447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 119s               kube-proxy       
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node pause-471447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node pause-471447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node pause-471447 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m5s               kubelet          Node pause-471447 status is now: NodeReady
	  Normal  RegisteredNode           2m1s               node-controller  Node pause-471447 event: Registered Node pause-471447 in Controller
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node pause-471447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node pause-471447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node pause-471447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                node-controller  Node pause-471447 event: Registered Node pause-471447 in Controller
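Editor's note: the node conditions and events in this "describe nodes" block (MemoryPressure/DiskPressure/PIDPressure False, Ready True, plus the two kubelet restarts) are all readable programmatically from the Node object. The minimal client-go sketch below prints the same conditions; the kubeconfig path is an assumption for illustration and is not taken from the test environment.

// node-conditions.go: illustrative read of the node conditions shown above (assumed kubeconfig path).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at this path points at the pause-471447 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "pause-471447", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Mirrors the Type / Status / Reason columns in the Conditions table above.
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}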
	
	
	==> dmesg <==
	[Dec21 21:03] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001727] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007430] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.739777] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100459] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.122960] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138611] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.489784] kauditd_printk_skb: 18 callbacks suppressed
	[Dec21 21:04] kauditd_printk_skb: 219 callbacks suppressed
	[ +24.515425] kauditd_printk_skb: 38 callbacks suppressed
	[Dec21 21:05] kauditd_printk_skb: 319 callbacks suppressed
	[  +0.322814] kauditd_printk_skb: 77 callbacks suppressed
	[  +6.692015] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a] <==
	{"level":"warn","ts":"2025-12-21T21:05:42.026702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.051373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.085116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.094915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.116697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.130344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.148373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.151551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.167742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.178358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.190806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.208945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.223449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.237142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.247180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.263143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.276450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.293390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.309100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.327709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.338428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.366611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.381543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.405705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.507475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42576","server-name":"","error":"EOF"}
	
	
	==> etcd [bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603] <==
	{"level":"warn","ts":"2025-12-21T21:05:16.726026Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-12-21T21:05:16.739507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T21:05:16.742877Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.123:2379"}
	{"level":"info","ts":"2025-12-21T21:05:16.744443Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-21T21:05:16.744511Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-471447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.123:2380"],"advertise-client-urls":["https://192.168.94.123:2379"]}
	{"level":"info","ts":"2025-12-21T21:05:16.745712Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2025/12/21 21:05:16 WARNING: [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"error","ts":"2025-12-21T21:05:16.748791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T21:05:16.748833Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.748847Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"eeb128165590df22","current-leader-member-id":"eeb128165590df22"}
	{"level":"info","ts":"2025-12-21T21:05:16.748898Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-21T21:05:16.748918Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	2025/12/21 21:05:16 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47464->127.0.0.1:2379: read: connection reset by peer"
	{"level":"warn","ts":"2025-12-21T21:05:16.754750Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T21:05:16.754802Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.123:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.754861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.123:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-21T21:05:16.781725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47470","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:47470: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.786914Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-21T21:05:16.789873Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T21:05:16.789934Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.789947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.863574Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.94.123:2380"}
	{"level":"error","ts":"2025-12-21T21:05:16.866517Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.123:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.866576Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.94.123:2380"}
	{"level":"info","ts":"2025-12-21T21:05:16.866585Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-471447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.123:2380"],"advertise-client-urls":["https://192.168.94.123:2379"]}
	
	
	==> kernel <==
	 21:05:58 up 2 min,  0 users,  load average: 0.70, 0.35, 0.14
	Linux pause-471447 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4] <==
	W1221 21:05:17.015693       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:17.015782       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1221 21:05:17.015907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1221 21:05:17.038662       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1221 21:05:17.044361       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1221 21:05:17.044461       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1221 21:05:17.044835       1 instance.go:239] Using reconciler: lease
	W1221 21:05:17.046190       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 21:05:17.047382       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.016721       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.016725       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.048671       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.350806       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.638643       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.929940       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:21.734787       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:21.990843       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:22.100754       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:25.681730       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:26.048935       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:26.090795       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:31.172254       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:33.337426       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:33.718475       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1221 21:05:37.046838       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6] <==
	I1221 21:05:43.433272       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 21:05:43.436952       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 21:05:43.437947       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 21:05:43.438063       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 21:05:43.438098       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1221 21:05:43.438483       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1221 21:05:43.440814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 21:05:43.440858       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 21:05:43.440876       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1221 21:05:43.441124       1 aggregator.go:171] initial CRD sync complete...
	I1221 21:05:43.441134       1 autoregister_controller.go:144] Starting autoregister controller
	I1221 21:05:43.441140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 21:05:43.441149       1 cache.go:39] Caches are synced for autoregister controller
	E1221 21:05:43.450193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 21:05:43.471026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 21:05:43.480498       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 21:05:43.762357       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 21:05:44.236022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 21:05:44.921193       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 21:05:44.997493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 21:05:45.037216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 21:05:45.045100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 21:05:46.940226       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 21:05:47.039238       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 21:05:53.743984       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba] <==
	I1221 21:05:46.812196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 21:05:46.814393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1221 21:05:46.818102       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1221 21:05:46.820470       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 21:05:46.822790       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1221 21:05:46.826007       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1221 21:05:46.828374       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 21:05:46.829550       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1221 21:05:46.832102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1221 21:05:46.833758       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 21:05:46.833850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 21:05:46.833857       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 21:05:46.833864       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 21:05:46.834183       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 21:05:46.834493       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 21:05:46.834765       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 21:05:46.836903       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 21:05:46.837038       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 21:05:46.837105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-471447"
	I1221 21:05:46.837155       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1221 21:05:46.839897       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 21:05:46.841647       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1221 21:05:46.844320       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1221 21:05:46.847774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 21:05:46.847967       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb4364083c5410000] <==
	
	
	==> kube-proxy [1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82] <==
	I1221 21:03:58.208567       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 21:03:58.310607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 21:03:58.310655       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.123"]
	E1221 21:03:58.310731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 21:03:58.473395       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 21:03:58.473569       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 21:03:58.473629       1 server_linux.go:132] "Using iptables Proxier"
	I1221 21:03:58.498235       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 21:03:58.503090       1 server.go:527] "Version info" version="v1.34.3"
	I1221 21:03:58.504364       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:03:58.524217       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 21:03:58.524370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 21:03:58.524815       1 config.go:200] "Starting service config controller"
	I1221 21:03:58.524829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 21:03:58.524926       1 config.go:106] "Starting endpoint slice config controller"
	I1221 21:03:58.524936       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 21:03:58.524958       1 config.go:309] "Starting node config controller"
	I1221 21:03:58.524970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 21:03:58.627554       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 21:03:58.628377       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 21:03:58.628411       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 21:03:58.628429       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06] <==
	E1221 21:05:40.249824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-471447&limit=500&resourceVersion=0\": dial tcp 192.168.94.123:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1221 21:05:45.245706       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 21:05:45.245759       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.123"]
	E1221 21:05:45.245841       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 21:05:45.293031       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 21:05:45.293089       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 21:05:45.293122       1 server_linux.go:132] "Using iptables Proxier"
	I1221 21:05:45.303470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 21:05:45.304433       1 server.go:527] "Version info" version="v1.34.3"
	I1221 21:05:45.304461       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:05:45.318795       1 config.go:200] "Starting service config controller"
	I1221 21:05:45.318830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 21:05:45.318847       1 config.go:106] "Starting endpoint slice config controller"
	I1221 21:05:45.318850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 21:05:45.318866       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 21:05:45.318869       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 21:05:45.321793       1 config.go:309] "Starting node config controller"
	I1221 21:05:45.321825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 21:05:45.321832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 21:05:45.418995       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 21:05:45.419173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 21:05:45.419186       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231] <==
	I1221 21:05:42.174177       1 serving.go:386] Generated self-signed cert in-memory
	I1221 21:05:43.414929       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 21:05:43.414968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:05:43.424487       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 21:05:43.424531       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1221 21:05:43.424567       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 21:05:43.424573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 21:05:43.424592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.424614       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.424827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 21:05:43.424880       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 21:05:43.525576       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.525632       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1221 21:05:43.525717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0] <==
	I1221 21:05:17.456204       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Dec 21 21:05:40 pause-471447 kubelet[3884]: E1221 21:05:40.833607    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: I1221 21:05:41.315821    3884 kubelet_node_status.go:75] "Attempting to register node" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.839611    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.840931    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.841389    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.841527    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:42 pause-471447 kubelet[3884]: E1221 21:05:42.842130    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.376540    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494095    3884 kubelet_node_status.go:124] "Node was previously registered" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494211    3884 kubelet_node_status.go:78] "Successfully registered node" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494236    3884 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.496197    3884 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.516082    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-471447\" already exists" pod="kube-system/etcd-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.516109    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.524736    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-471447\" already exists" pod="kube-system/kube-apiserver-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.524767    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.534825    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-471447\" already exists" pod="kube-system/kube-controller-manager-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.534849    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.544017    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-471447\" already exists" pod="kube-system/kube-scheduler-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.651547    3884 apiserver.go:52] "Watching apiserver"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.675733    3884 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.748831    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6-xtables-lock\") pod \"kube-proxy-76nfp\" (UID: \"b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6\") " pod="kube-system/kube-proxy-76nfp"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.748903    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6-lib-modules\") pod \"kube-proxy-76nfp\" (UID: \"b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6\") " pod="kube-system/kube-proxy-76nfp"
	Dec 21 21:05:49 pause-471447 kubelet[3884]: E1221 21:05:49.815362    3884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766351149813816324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:05:49 pause-471447 kubelet[3884]: E1221 21:05:49.815427    3884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766351149813816324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-471447 -n pause-471447
helpers_test.go:270: (dbg) Run:  kubectl --context pause-471447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
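The second post-mortem below captures minikube repeatedly probing the apiserver health endpoint ("Checking apiserver healthz at https://192.168.94.123:8443/healthz ... returned 200: ok"). As a minimal, self-contained sketch of that kind of readiness probe in Go — assuming the address shown in the log, and skipping TLS verification purely for brevity (the real client authenticates against the cluster CA from the minikube profile) — one could write:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz keeps requesting https://<addr>/healthz until it answers 200 OK
// or the overall timeout expires. Hypothetical helper for illustration only;
// it is not minikube's actual implementation.
func pollHealthz(addr string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Verification is skipped to keep the sketch short; a real check
			// would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the endpoint answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", addr, timeout)
}

func main() {
	// Address taken from the healthz lines in the log below.
	if err := pollHealthz("192.168.94.123:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("healthz returned 200: ok")
}

Run against a live profile, this loops until /healthz returns 200 or the time budget is exhausted, which roughly mirrors the wait reported in the api_server.go lines of the log that follows.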
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-471447 -n pause-471447
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-471447 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-471447 logs -n 25: (1.363625866s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-340687 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │                     │
	│ ssh     │ -p cilium-340687 sudo crio config                                                                                                                                                                                                           │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │                     │
	│ delete  │ -p cilium-340687                                                                                                                                                                                                                            │ cilium-340687             │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │ 21 Dec 25 21:01 UTC │
	│ start   │ -p guest-667849 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-667849              │ jenkins │ v1.37.0 │ 21 Dec 25 21:01 UTC │ 21 Dec 25 21:02 UTC │
	│ delete  │ -p force-systemd-env-764266                                                                                                                                                                                                                 │ force-systemd-env-764266  │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p cert-expiration-514100 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-514100    │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p force-systemd-flag-048347 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:03 UTC │
	│ image   │ test-preload-759510 image pull public.ecr.aws/docker/library/busybox:latest                                                                                                                                                                 │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ stop    │ -p test-preload-759510                                                                                                                                                                                                                      │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:02 UTC │
	│ start   │ -p test-preload-759510 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                                                                                                            │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:02 UTC │ 21 Dec 25 21:03 UTC │
	│ ssh     │ force-systemd-flag-048347 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ delete  │ -p force-systemd-flag-048347                                                                                                                                                                                                                │ force-systemd-flag-048347 │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ start   │ -p pause-471447 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-471447              │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:04 UTC │
	│ image   │ test-preload-759510 image list                                                                                                                                                                                                              │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ delete  │ -p test-preload-759510                                                                                                                                                                                                                      │ test-preload-759510       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:03 UTC │
	│ start   │ -p cert-options-764127 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:03 UTC │ 21 Dec 25 21:04 UTC │
	│ ssh     │ cert-options-764127 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ ssh     │ -p cert-options-764127 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ delete  │ -p cert-options-764127                                                                                                                                                                                                                      │ cert-options-764127       │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ start   │ -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-458928    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:05 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-787082 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-787082    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │                     │
	│ delete  │ -p running-upgrade-787082                                                                                                                                                                                                                   │ running-upgrade-787082    │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:04 UTC │
	│ start   │ -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-419917         │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │                     │
	│ start   │ -p pause-471447 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-471447              │ jenkins │ v1.37.0 │ 21 Dec 25 21:04 UTC │ 21 Dec 25 21:05 UTC │
	│ start   │ -p cert-expiration-514100 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-514100    │ jenkins │ v1.37.0 │ 21 Dec 25 21:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 21:05:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 21:05:49.819786  164091 out.go:360] Setting OutFile to fd 1 ...
	I1221 21:05:49.819873  164091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:05:49.819876  164091 out.go:374] Setting ErrFile to fd 2...
	I1221 21:05:49.819879  164091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:05:49.820065  164091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 21:05:49.820576  164091 out.go:368] Setting JSON to false
	I1221 21:05:49.821503  164091 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1766333850,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 21:05:49.821561  164091 start.go:143] virtualization: kvm guest
	I1221 21:05:49.823745  164091 out.go:179] * [cert-expiration-514100] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 21:05:49.825033  164091 notify.go:221] Checking for updates...
	I1221 21:05:49.825063  164091 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 21:05:49.826418  164091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 21:05:49.827836  164091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:05:49.829071  164091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 21:05:49.830383  164091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 21:05:49.831621  164091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 21:05:49.833341  164091 config.go:182] Loaded profile config "cert-expiration-514100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:05:49.834034  164091 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 21:05:49.871631  164091 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 21:05:49.872952  164091 start.go:309] selected driver: kvm2
	I1221 21:05:49.872975  164091 start.go:928] validating driver "kvm2" against &{Name:cert-expiration-514100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.34.3 ClusterName:cert-expiration-514100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:05:49.873076  164091 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 21:05:49.874154  164091 cni.go:84] Creating CNI manager for ""
	I1221 21:05:49.874208  164091 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 21:05:49.874239  164091 start.go:353] cluster config:
	{Name:cert-expiration-514100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-514100 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 21:05:49.874324  164091 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 21:05:49.876080  164091 out.go:179] * Starting "cert-expiration-514100" primary control-plane node in "cert-expiration-514100" cluster
	I1221 21:05:49.877351  164091 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 21:05:49.877377  164091 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 21:05:49.877391  164091 cache.go:65] Caching tarball of preloaded images
	I1221 21:05:49.877510  164091 preload.go:251] Found /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 21:05:49.877517  164091 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 21:05:49.877594  164091 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/cert-expiration-514100/config.json ...
	I1221 21:05:49.877797  164091 start.go:360] acquireMachinesLock for cert-expiration-514100: {Name:mkd449b545e9165e82ce02652c0c22eb5894063b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1221 21:05:49.877840  164091 start.go:364] duration metric: took 31.049µs to acquireMachinesLock for "cert-expiration-514100"
	I1221 21:05:49.877851  164091 start.go:96] Skipping create...Using existing machine configuration
	I1221 21:05:49.877855  164091 fix.go:54] fixHost starting: 
	I1221 21:05:49.879818  164091 fix.go:112] recreateIfNeeded on cert-expiration-514100: state=Running err=<nil>
	W1221 21:05:49.879839  164091 fix.go:138] unexpected machine state, will restart: <nil>
	W1221 21:05:46.306982  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:48.806203  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	I1221 21:05:45.319836  163580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 21:05:45.338375  163580 node_ready.go:35] waiting up to 6m0s for node "pause-471447" to be "Ready" ...
	I1221 21:05:45.343334  163580 node_ready.go:49] node "pause-471447" is "Ready"
	I1221 21:05:45.343365  163580 node_ready.go:38] duration metric: took 4.936763ms for node "pause-471447" to be "Ready" ...
	I1221 21:05:45.343379  163580 api_server.go:52] waiting for apiserver process to appear ...
	I1221 21:05:45.343429  163580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 21:05:45.364211  163580 api_server.go:72] duration metric: took 260.373622ms to wait for apiserver process to appear ...
	I1221 21:05:45.364242  163580 api_server.go:88] waiting for apiserver healthz status ...
	I1221 21:05:45.364271  163580 api_server.go:253] Checking apiserver healthz at https://192.168.94.123:8443/healthz ...
	I1221 21:05:45.370604  163580 api_server.go:279] https://192.168.94.123:8443/healthz returned 200:
	ok
	I1221 21:05:45.372216  163580 api_server.go:141] control plane version: v1.34.3
	I1221 21:05:45.372246  163580 api_server.go:131] duration metric: took 7.995465ms to wait for apiserver health ...
	I1221 21:05:45.372277  163580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 21:05:45.376527  163580 system_pods.go:59] 6 kube-system pods found
	I1221 21:05:45.376561  163580 system_pods.go:61] "coredns-66bc5c9577-4fcrq" [9a867251-b10f-44e7-ada2-057f1bb6273e] Running
	I1221 21:05:45.376580  163580 system_pods.go:61] "etcd-pause-471447" [a9e0adb8-8533-49ab-a3f3-e1d7b67590a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:05:45.376590  163580 system_pods.go:61] "kube-apiserver-pause-471447" [676e5648-fd2a-43a8-833b-72a4e83a298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:05:45.376605  163580 system_pods.go:61] "kube-controller-manager-pause-471447" [369471f4-ef0b-4855-b109-4ac8fede00e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:05:45.376610  163580 system_pods.go:61] "kube-proxy-76nfp" [b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6] Running
	I1221 21:05:45.376619  163580 system_pods.go:61] "kube-scheduler-pause-471447" [1e76d4a2-86f6-4eaf-b970-bd879f047ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 21:05:45.376632  163580 system_pods.go:74] duration metric: took 4.344471ms to wait for pod list to return data ...
	I1221 21:05:45.376647  163580 default_sa.go:34] waiting for default service account to be created ...
	I1221 21:05:45.380135  163580 default_sa.go:45] found service account: "default"
	I1221 21:05:45.380169  163580 default_sa.go:55] duration metric: took 3.510695ms for default service account to be created ...
	I1221 21:05:45.380178  163580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 21:05:45.382916  163580 system_pods.go:86] 6 kube-system pods found
	I1221 21:05:45.382942  163580 system_pods.go:89] "coredns-66bc5c9577-4fcrq" [9a867251-b10f-44e7-ada2-057f1bb6273e] Running
	I1221 21:05:45.382951  163580 system_pods.go:89] "etcd-pause-471447" [a9e0adb8-8533-49ab-a3f3-e1d7b67590a3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 21:05:45.382957  163580 system_pods.go:89] "kube-apiserver-pause-471447" [676e5648-fd2a-43a8-833b-72a4e83a298a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 21:05:45.382964  163580 system_pods.go:89] "kube-controller-manager-pause-471447" [369471f4-ef0b-4855-b109-4ac8fede00e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 21:05:45.382969  163580 system_pods.go:89] "kube-proxy-76nfp" [b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6] Running
	I1221 21:05:45.382974  163580 system_pods.go:89] "kube-scheduler-pause-471447" [1e76d4a2-86f6-4eaf-b970-bd879f047ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 21:05:45.382981  163580 system_pods.go:126] duration metric: took 2.79789ms to wait for k8s-apps to be running ...
	I1221 21:05:45.382989  163580 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 21:05:45.383044  163580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 21:05:45.408505  163580 system_svc.go:56] duration metric: took 25.488943ms WaitForService to wait for kubelet
	I1221 21:05:45.408545  163580 kubeadm.go:587] duration metric: took 304.71232ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 21:05:45.408572  163580 node_conditions.go:102] verifying NodePressure condition ...
	I1221 21:05:45.412571  163580 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1221 21:05:45.412604  163580 node_conditions.go:123] node cpu capacity is 2
	I1221 21:05:45.412623  163580 node_conditions.go:105] duration metric: took 4.044046ms to run NodePressure ...
	I1221 21:05:45.412640  163580 start.go:242] waiting for startup goroutines ...
	I1221 21:05:45.412650  163580 start.go:247] waiting for cluster config update ...
	I1221 21:05:45.412667  163580 start.go:256] writing updated cluster config ...
	I1221 21:05:45.413130  163580 ssh_runner.go:195] Run: rm -f paused
	I1221 21:05:45.421201  163580 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:45.422288  163580 kapi.go:59] client config for pause-471447: &rest.Config{Host:"https://192.168.94.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/profiles/pause-471447/client.key", CAFile:"/home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[
]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 21:05:45.425820  163580 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4fcrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:45.432608  163580 pod_ready.go:94] pod "coredns-66bc5c9577-4fcrq" is "Ready"
	I1221 21:05:45.432637  163580 pod_ready.go:86] duration metric: took 6.794741ms for pod "coredns-66bc5c9577-4fcrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:45.439893  163580 pod_ready.go:83] waiting for pod "etcd-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 21:05:47.446314  163580 pod_ready.go:104] pod "etcd-pause-471447" is not "Ready", error: <nil>
	I1221 21:05:47.946878  163580 pod_ready.go:94] pod "etcd-pause-471447" is "Ready"
	I1221 21:05:47.946906  163580 pod_ready.go:86] duration metric: took 2.506984997s for pod "etcd-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:47.949427  163580 pod_ready.go:83] waiting for pod "kube-apiserver-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 21:05:49.955512  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	W1221 21:05:50.555382  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	W1221 21:05:53.052849  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	I1221 21:05:49.881523  164091 out.go:252] * Updating the running kvm2 "cert-expiration-514100" VM ...
	I1221 21:05:49.881543  164091 machine.go:94] provisionDockerMachine start ...
	I1221 21:05:49.884035  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:49.884529  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:49.884545  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:49.884714  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:49.884918  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:49.884923  164091 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 21:05:49.996251  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-514100
	
	I1221 21:05:49.996275  164091 buildroot.go:166] provisioning hostname "cert-expiration-514100"
	I1221 21:05:49.999972  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.000463  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.000510  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.000707  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.000991  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.001003  164091 main.go:144] libmachine: About to run SSH command:
	sudo hostname cert-expiration-514100 && echo "cert-expiration-514100" | sudo tee /etc/hostname
	I1221 21:05:50.129769  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-expiration-514100
	
	I1221 21:05:50.133241  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.133734  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.133762  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.133926  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.134123  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.134132  164091 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-514100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-514100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-514100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 21:05:50.239330  164091 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 21:05:50.239351  164091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22179-122429/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-122429/.minikube}
	I1221 21:05:50.239370  164091 buildroot.go:174] setting up certificates
	I1221 21:05:50.239379  164091 provision.go:84] configureAuth start
	I1221 21:05:50.242428  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.242772  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.242789  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245046  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245414  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.245431  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.245558  164091 provision.go:143] copyHostCerts
	I1221 21:05:50.245618  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem, removing ...
	I1221 21:05:50.245629  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem
	I1221 21:05:50.245702  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/ca.pem (1082 bytes)
	I1221 21:05:50.245835  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem, removing ...
	I1221 21:05:50.245839  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem
	I1221 21:05:50.245867  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/cert.pem (1123 bytes)
	I1221 21:05:50.245916  164091 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem, removing ...
	I1221 21:05:50.245919  164091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem
	I1221 21:05:50.245939  164091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-122429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-122429/.minikube/key.pem (1679 bytes)
	I1221 21:05:50.245993  164091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-514100 san=[127.0.0.1 192.168.50.159 cert-expiration-514100 localhost minikube]
	I1221 21:05:50.313899  164091 provision.go:177] copyRemoteCerts
	I1221 21:05:50.313981  164091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 21:05:50.317055  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.317516  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.317539  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.317694  164091 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/cert-expiration-514100/id_rsa Username:docker}
	I1221 21:05:50.401919  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 21:05:50.439534  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1221 21:05:50.476843  164091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-122429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 21:05:50.516573  164091 provision.go:87] duration metric: took 277.178435ms to configureAuth
	I1221 21:05:50.516600  164091 buildroot.go:189] setting minikube options for container-runtime
	I1221 21:05:50.516857  164091 config.go:182] Loaded profile config "cert-expiration-514100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:05:50.520307  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.520752  164091 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:31:ed:0b", ip: ""} in network mk-cert-expiration-514100: {Iface:virbr2 ExpiryTime:2025-12-21 22:02:27 +0000 UTC Type:0 Mac:52:54:00:31:ed:0b Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:cert-expiration-514100 Clientid:01:52:54:00:31:ed:0b}
	I1221 21:05:50.520767  164091 main.go:144] libmachine: domain cert-expiration-514100 has defined IP address 192.168.50.159 and MAC address 52:54:00:31:ed:0b in network mk-cert-expiration-514100
	I1221 21:05:50.520960  164091 main.go:144] libmachine: Using SSH client type: native
	I1221 21:05:50.521161  164091 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I1221 21:05:50.521168  164091 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1221 21:05:50.808612  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:53.307693  163342 pod_ready.go:104] pod "coredns-5dd5756b68-xp8fg" is not "Ready", error: <nil>
	W1221 21:05:51.956729  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	W1221 21:05:54.456455  163580 pod_ready.go:104] pod "kube-apiserver-pause-471447" is not "Ready", error: <nil>
	I1221 21:05:56.456127  163580 pod_ready.go:94] pod "kube-apiserver-pause-471447" is "Ready"
	I1221 21:05:56.456166  163580 pod_ready.go:86] duration metric: took 8.506718193s for pod "kube-apiserver-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.459636  163580 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.466907  163580 pod_ready.go:94] pod "kube-controller-manager-pause-471447" is "Ready"
	I1221 21:05:56.466950  163580 pod_ready.go:86] duration metric: took 7.273093ms for pod "kube-controller-manager-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.470653  163580 pod_ready.go:83] waiting for pod "kube-proxy-76nfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.477292  163580 pod_ready.go:94] pod "kube-proxy-76nfp" is "Ready"
	I1221 21:05:56.477335  163580 pod_ready.go:86] duration metric: took 6.644759ms for pod "kube-proxy-76nfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.479545  163580 pod_ready.go:83] waiting for pod "kube-scheduler-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.654729  163580 pod_ready.go:94] pod "kube-scheduler-pause-471447" is "Ready"
	I1221 21:05:56.654767  163580 pod_ready.go:86] duration metric: took 175.190158ms for pod "kube-scheduler-pause-471447" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.654785  163580 pod_ready.go:40] duration metric: took 11.233547255s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:56.717504  163580 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 21:05:56.719611  163580 out.go:179] * Done! kubectl is now configured to use "pause-471447" cluster and "default" namespace by default
	I1221 21:05:55.307100  163342 pod_ready.go:94] pod "coredns-5dd5756b68-xp8fg" is "Ready"
	I1221 21:05:55.307138  163342 pod_ready.go:86] duration metric: took 27.507559094s for pod "coredns-5dd5756b68-xp8fg" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.311173  163342 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.317617  163342 pod_ready.go:94] pod "etcd-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.317648  163342 pod_ready.go:86] duration metric: took 6.448841ms for pod "etcd-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.321218  163342 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.333479  163342 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.333532  163342 pod_ready.go:86] duration metric: took 12.285995ms for pod "kube-apiserver-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.336721  163342 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.503187  163342 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-458928" is "Ready"
	I1221 21:05:55.503219  163342 pod_ready.go:86] duration metric: took 166.467785ms for pod "kube-controller-manager-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:55.707058  163342 pod_ready.go:83] waiting for pod "kube-proxy-6d8w8" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.104404  163342 pod_ready.go:94] pod "kube-proxy-6d8w8" is "Ready"
	I1221 21:05:56.104445  163342 pod_ready.go:86] duration metric: took 397.344523ms for pod "kube-proxy-6d8w8" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.305399  163342 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.704616  163342 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-458928" is "Ready"
	I1221 21:05:56.704655  163342 pod_ready.go:86] duration metric: took 399.224899ms for pod "kube-scheduler-old-k8s-version-458928" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 21:05:56.704674  163342 pod_ready.go:40] duration metric: took 38.915597646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 21:05:56.768132  163342 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1221 21:05:56.770304  163342 out.go:203] 
	W1221 21:05:56.771836  163342 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1221 21:05:56.773421  163342 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1221 21:05:56.775154  163342 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-458928" cluster and "default" namespace by default
	W1221 21:05:55.053397  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	W1221 21:05:57.056641  163553 pod_ready.go:104] pod "coredns-7d764666f9-ccdcv" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.594718457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5913bddb-bf84-45db-b433-9a9c34a28dcd name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.597276314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=545a895c-6356-40db-a815-908606f27cbd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.600059440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351159600022590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=545a895c-6356-40db-a815-908606f27cbd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.602227193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f1f95d4-6441-4f19-83be-1bddd6754abd name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.602717365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f1f95d4-6441-4f19-83be-1bddd6754abd name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.603433956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f1f95d4-6441-4f19-83be-1bddd6754abd name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.663269882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=154ad4fa-bc0f-486a-881b-c67eaff8b851 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.663412564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=154ad4fa-bc0f-486a-881b-c67eaff8b851 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.664645845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2ff4cd1-c295-452f-856c-4fd150c6c53f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.665027930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351159665007678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2ff4cd1-c295-452f-856c-4fd150c6c53f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.666162071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb747299-349a-4442-9f48-d90c7d4da632 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.666244739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb747299-349a-4442-9f48-d90c7d4da632 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.667234980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb747299-349a-4442-9f48-d90c7d4da632 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.709253882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02b37a4d-f645-43b3-b287-a605697e8417 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.709497102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02b37a4d-f645-43b3-b287-a605697e8417 name=/runtime.v1.RuntimeService/Version
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.711146854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5ae307e-fdbc-4292-ad23-40bf4f5dbe0a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.711836323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1766351159711801814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:128011,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5ae307e-fdbc-4292-ad23-40bf4f5dbe0a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.715958631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=832b970d-d827-422c-82b4-a269c5ca6457 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.716329246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=832b970d-d827-422c-82b4-a269c5ca6457 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.716795809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc
14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1766351115107462823,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb436
4083c5410000,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1766351115172522223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1766351114946873908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1766351114834656864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6,PodSandboxId:73296b379ad7cb48c183b8af7c79381bb07f02f40043251dee7614d396261b3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1766351038358762042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82,PodSandboxId:e0deaffec26ef529ec7d9ee5a18a1bea113d79b9d0ee350707edc4e529961ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e4948
0fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1766351037542998480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=832b970d-d827-422c-82b4-a269c5ca6457 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.730440711Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e59c067e-be0c-40da-8c64-ca7bea04ca68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.731047256Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4fcrq,Uid:9a867251-b10f-44e7-ada2-057f1bb6273e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1766351114776491016,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T21:03:57.283986475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&PodSandboxMetadata{Name:etcd-pause-471447,Uid:bc1053229f473c15a0d5de19f4163b2e,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1766351114444219082,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.94.123:2379,kubernetes.io/config.hash: bc1053229f473c15a0d5de19f4163b2e,kubernetes.io/config.seen: 2025-12-21T21:03:51.739084636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-471447,Uid:58ceefc332265c98cad91e49ee4ea553,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1766351114434950664,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58ceefc332265c98cad91e49ee4ea553,kubernetes.io/config.seen: 2025-12-21T21:03:51.739087172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&PodSandboxMetadata{Name:kube-proxy-76nfp,Uid:b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1766351114405247264,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-21T21:03:56.325408254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:799a0567b6a81f6d2df124907d71bc1b1
3a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-471447,Uid:3ce57c2667109ff3779565d235787c36,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1766351114357930593,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3ce57c2667109ff3779565d235787c36,kubernetes.io/config.seen: 2025-12-21T21:03:51.738914652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-471447,Uid:90561bae391763380a8abda94df97fd4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1766351114329188977,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.94.123:8443,kubernetes.io/config.hash: 90561bae391763380a8abda94df97fd4,kubernetes.io/config.seen: 2025-12-21T21:03:51.739086120Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e59c067e-be0c-40da-8c64-ca7bea04ca68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.732942250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c572ed1-d709-440f-a2dc-9aafb3ce7088 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.733128311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c572ed1-d709-440f-a2dc-9aafb3ce7088 name=/runtime.v1.RuntimeService/ListContainers
	Dec 21 21:05:59 pause-471447 crio[2803]: time="2025-12-21 21:05:59.734021813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba,PodSandboxId:4cfdc2729aa10488f854e263f52bcfa12d34a337a2b88c0e8537d918b5a3aa58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1766351140165246862,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58ceefc332265c98cad91e49ee4ea553,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6,PodSandboxId:c24196a03cc2030e935a61b37742c373492eaad0288a23aef8a03b0c3074f573,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1766351140168986935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90561bae391763380a8abda94df97fd4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231,PodSandboxId:799a0567b6a81f6d2df124907d71bc1b13a8ff4652b14ba39775f7b1aa0dfcaa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1766351140156438675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-471447,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce57c2667109ff3779565d235787c36,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a,PodSandboxId:56918331a3709800b1a3cd157c50936a8ad547d2a5bdfc14f71dd7d4e5029d2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1766351136852567053,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-471447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1053229f473c15a0d5de19f4163b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06,PodSandboxId:68d45ff2273c2f4ee29ad02fc7a91e53dd516be05cd3cd7d4e5cb0e287409515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Sta
te:CONTAINER_RUNNING,CreatedAt:1766351115220273113,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76nfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf,PodSandboxId:60749c7bcea3bf072bc7098ce7b85e484b59d718ea3b2a5cb24098f7975aa2c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17663
51116231558123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4fcrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a867251-b10f-44e7-ada2-057f1bb6273e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c572ed1-d709-440f-a2dc-9aafb3ce7088 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	68d794a227631       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   19 seconds ago      Running             kube-apiserver            2                   c24196a03cc20       kube-apiserver-pause-471447            kube-system
	6309a5049c66e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   19 seconds ago      Running             kube-controller-manager   2                   4cfdc2729aa10       kube-controller-manager-pause-471447   kube-system
	4fa30cec5ae59       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   19 seconds ago      Running             kube-scheduler            2                   799a0567b6a81       kube-scheduler-pause-471447            kube-system
	756deeeab4bb7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   22 seconds ago      Running             etcd                      2                   56918331a3709       etcd-pause-471447                      kube-system
	523955d2d2268       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Running             coredns                   1                   60749c7bcea3b       coredns-66bc5c9577-4fcrq               kube-system
	f6abb2a527255       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   44 seconds ago      Running             kube-proxy                1                   68d45ff2273c2       kube-proxy-76nfp                       kube-system
	93325ef162c03       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   44 seconds ago      Exited              kube-controller-manager   1                   4cfdc2729aa10       kube-controller-manager-pause-471447   kube-system
	bafe84bead083       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   44 seconds ago      Exited              etcd                      1                   56918331a3709       etcd-pause-471447                      kube-system
	d354cf813045d       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   44 seconds ago      Exited              kube-scheduler            1                   799a0567b6a81       kube-scheduler-pause-471447            kube-system
	54f5cbc0350f6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   44 seconds ago      Exited              kube-apiserver            1                   c24196a03cc20       kube-apiserver-pause-471447            kube-system
	f74d0e9192c75       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   2 minutes ago       Exited              coredns                   0                   73296b379ad7c       coredns-66bc5c9577-4fcrq               kube-system
	1b7e7ebf7d263       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   2 minutes ago       Exited              kube-proxy                0                   e0deaffec26ef       kube-proxy-76nfp                       kube-system
	
	
	==> coredns [523955d2d2268a2b609203c9380984ff164947649220d7efa18b665d6195dbcf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48830 - 42101 "HINFO IN 1270418721480851838.2093793463464312729. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018515858s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39092->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39086->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39100->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [f74d0e9192c754ede5d957fc537828b6c07fb9165277516435bc22722ee06dd6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40164 - 51601 "HINFO IN 7054236044647090558.315467932222168931. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.014999555s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-471447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-471447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-471447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T21_03_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 21:03:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-471447
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 21:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 21:05:43 +0000   Sun, 21 Dec 2025 21:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.123
	  Hostname:    pause-471447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c6a8e44b9524a34ab568486e0b3afb8
	  System UUID:                8c6a8e44-b952-4a34-ab56-8486e0b3afb8
	  Boot ID:                    0b24680a-4dd3-4e1f-a7cc-21ab4492f382
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4fcrq                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m3s
	  kube-system                 etcd-pause-471447                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-pause-471447             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-pause-471447    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-76nfp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-pause-471447             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m1s               kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node pause-471447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node pause-471447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node pause-471447 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m8s               kubelet          Node pause-471447 status is now: NodeReady
	  Normal  RegisteredNode           2m4s               node-controller  Node pause-471447 event: Registered Node pause-471447 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-471447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-471447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-471447 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-471447 event: Registered Node pause-471447 in Controller
	
	
	==> dmesg <==
	[Dec21 21:03] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001727] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007430] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.739777] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100459] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.122960] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138611] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.489784] kauditd_printk_skb: 18 callbacks suppressed
	[Dec21 21:04] kauditd_printk_skb: 219 callbacks suppressed
	[ +24.515425] kauditd_printk_skb: 38 callbacks suppressed
	[Dec21 21:05] kauditd_printk_skb: 319 callbacks suppressed
	[  +0.322814] kauditd_printk_skb: 77 callbacks suppressed
	[  +6.692015] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [756deeeab4bb7a3d6b9eef2125df3b7e54bb881ac34b916250873bad24af3d8a] <==
	{"level":"warn","ts":"2025-12-21T21:05:42.026702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.051373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.085116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.094915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.116697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.130344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.148373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.151551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.167742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.178358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.190806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.208945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.223449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.237142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.247180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.263143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.276450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.293390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.309100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.327709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.338428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.366611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.381543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.405705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T21:05:42.507475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42576","server-name":"","error":"EOF"}
	
	
	==> etcd [bafe84bead083c74c92a835751b6b8106dfba6e16ae4608d0a10f6810545a603] <==
	{"level":"warn","ts":"2025-12-21T21:05:16.726026Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-12-21T21:05:16.739507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T21:05:16.742877Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.123:2379"}
	{"level":"info","ts":"2025-12-21T21:05:16.744443Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-21T21:05:16.744511Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-471447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.123:2380"],"advertise-client-urls":["https://192.168.94.123:2379"]}
	{"level":"info","ts":"2025-12-21T21:05:16.745712Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2025/12/21 21:05:16 WARNING: [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"error","ts":"2025-12-21T21:05:16.748791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-21T21:05:16.748833Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.748847Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"eeb128165590df22","current-leader-member-id":"eeb128165590df22"}
	{"level":"info","ts":"2025-12-21T21:05:16.748898Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-21T21:05:16.748918Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	2025/12/21 21:05:16 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47464->127.0.0.1:2379: read: connection reset by peer"
	{"level":"warn","ts":"2025-12-21T21:05:16.754750Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T21:05:16.754802Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.123:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.754861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.123:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-21T21:05:16.781725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47470","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:47470: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.786914Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-21T21:05:16.789873Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-21T21:05:16.789934Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-21T21:05:16.789947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.863574Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.94.123:2380"}
	{"level":"error","ts":"2025-12-21T21:05:16.866517Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.123:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-21T21:05:16.866576Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.94.123:2380"}
	{"level":"info","ts":"2025-12-21T21:05:16.866585Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-471447","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.123:2380"],"advertise-client-urls":["https://192.168.94.123:2379"]}
	
	
	==> kernel <==
	 21:06:00 up 2 min,  0 users,  load average: 0.64, 0.35, 0.14
	Linux pause-471447 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 20 21:36:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54f5cbc0350f6be6538794d454563d9d2e4dd4797d5a2ccec8d8bbc760f105b4] <==
	W1221 21:05:17.015693       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:17.015782       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1221 21:05:17.015907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1221 21:05:17.038662       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1221 21:05:17.044361       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1221 21:05:17.044461       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1221 21:05:17.044835       1 instance.go:239] Using reconciler: lease
	W1221 21:05:17.046190       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 21:05:17.047382       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.016721       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.016725       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:18.048671       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.350806       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.638643       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:19.929940       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:21.734787       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:21.990843       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:22.100754       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:25.681730       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:26.048935       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:26.090795       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:31.172254       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:33.337426       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1221 21:05:33.718475       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1221 21:05:37.046838       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
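	For context on the block above: the repeated "connection refused" dials to 127.0.0.1:2379 line up with the etcd shutdown logged earlier, so this kube-apiserver instance was retrying against a listener that no longer existed until the storage factory timed out. Below is a minimal Go sketch of the kind of probe that separates "port not listening" (connection refused, as here) from a TLS/handshake failure on an open socket; it is a hypothetical helper, not part of minikube or this test suite.
	
	// probe_etcd.go: hypothetical helper, not part of minikube or these tests.
	// It only checks whether the etcd client port accepts TCP connections.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "127.0.0.1:2379" // etcd client endpoint seen in the logs above
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("etcd endpoint %s not reachable: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("etcd endpoint %s accepts TCP connections\n", addr)
	}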
	
	
	==> kube-apiserver [68d794a227631af73b57a6be3b7b3d1ecad90e9b5855855f7d4ce6432c3149a6] <==
	I1221 21:05:43.433272       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 21:05:43.436952       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 21:05:43.437947       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 21:05:43.438063       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 21:05:43.438098       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1221 21:05:43.438483       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1221 21:05:43.440814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 21:05:43.440858       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 21:05:43.440876       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1221 21:05:43.441124       1 aggregator.go:171] initial CRD sync complete...
	I1221 21:05:43.441134       1 autoregister_controller.go:144] Starting autoregister controller
	I1221 21:05:43.441140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 21:05:43.441149       1 cache.go:39] Caches are synced for autoregister controller
	E1221 21:05:43.450193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 21:05:43.471026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 21:05:43.480498       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 21:05:43.762357       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 21:05:44.236022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 21:05:44.921193       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 21:05:44.997493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 21:05:45.037216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 21:05:45.045100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 21:05:46.940226       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 21:05:47.039238       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 21:05:53.743984       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6309a5049c66ec0af407317a4d7d6fe72a27a5fac495877a8d5c3c390bdd9aba] <==
	I1221 21:05:46.812196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 21:05:46.814393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1221 21:05:46.818102       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1221 21:05:46.820470       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 21:05:46.822790       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1221 21:05:46.826007       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1221 21:05:46.828374       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 21:05:46.829550       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1221 21:05:46.832102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1221 21:05:46.833758       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 21:05:46.833850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 21:05:46.833857       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 21:05:46.833864       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 21:05:46.834183       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 21:05:46.834493       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 21:05:46.834765       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 21:05:46.836903       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 21:05:46.837038       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 21:05:46.837105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-471447"
	I1221 21:05:46.837155       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1221 21:05:46.839897       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 21:05:46.841647       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1221 21:05:46.844320       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1221 21:05:46.847774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 21:05:46.847967       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [93325ef162c038b4a11c0bda41762a5caaaad854c9249abeb4364083c5410000] <==
	
	
	==> kube-proxy [1b7e7ebf7d263f3f3630350b9e77caa2d7ef837752fcd7c507ef35d28b2baf82] <==
	I1221 21:03:58.208567       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 21:03:58.310607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 21:03:58.310655       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.123"]
	E1221 21:03:58.310731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 21:03:58.473395       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 21:03:58.473569       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 21:03:58.473629       1 server_linux.go:132] "Using iptables Proxier"
	I1221 21:03:58.498235       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 21:03:58.503090       1 server.go:527] "Version info" version="v1.34.3"
	I1221 21:03:58.504364       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:03:58.524217       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 21:03:58.524370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 21:03:58.524815       1 config.go:200] "Starting service config controller"
	I1221 21:03:58.524829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 21:03:58.524926       1 config.go:106] "Starting endpoint slice config controller"
	I1221 21:03:58.524936       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 21:03:58.524958       1 config.go:309] "Starting node config controller"
	I1221 21:03:58.524970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 21:03:58.627554       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 21:03:58.628377       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 21:03:58.628411       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 21:03:58.628429       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f6abb2a527255ca4a3945395fc1c553575631f1220fbbd717a3483a877bdea06] <==
	E1221 21:05:40.249824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-471447&limit=500&resourceVersion=0\": dial tcp 192.168.94.123:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1221 21:05:45.245706       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 21:05:45.245759       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.123"]
	E1221 21:05:45.245841       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 21:05:45.293031       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1221 21:05:45.293089       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1221 21:05:45.293122       1 server_linux.go:132] "Using iptables Proxier"
	I1221 21:05:45.303470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 21:05:45.304433       1 server.go:527] "Version info" version="v1.34.3"
	I1221 21:05:45.304461       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:05:45.318795       1 config.go:200] "Starting service config controller"
	I1221 21:05:45.318830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 21:05:45.318847       1 config.go:106] "Starting endpoint slice config controller"
	I1221 21:05:45.318850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 21:05:45.318866       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 21:05:45.318869       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 21:05:45.321793       1 config.go:309] "Starting node config controller"
	I1221 21:05:45.321825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 21:05:45.321832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 21:05:45.418995       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 21:05:45.419173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 21:05:45.419186       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4fa30cec5ae5968b7ebe6b44147a15357ae41e2146d8f311d2b6e66657590231] <==
	I1221 21:05:42.174177       1 serving.go:386] Generated self-signed cert in-memory
	I1221 21:05:43.414929       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 21:05:43.414968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 21:05:43.424487       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 21:05:43.424531       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1221 21:05:43.424567       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 21:05:43.424573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 21:05:43.424592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.424614       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.424827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 21:05:43.424880       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 21:05:43.525576       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 21:05:43.525632       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1221 21:05:43.525717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d354cf813045d0f0a1ebe95d2de31a795d6393130b097c2e6677aff705ff6fb0] <==
	I1221 21:05:17.456204       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.839611    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.840931    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.841389    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:41 pause-471447 kubelet[3884]: E1221 21:05:41.841527    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:42 pause-471447 kubelet[3884]: E1221 21:05:42.842130    3884 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471447\" not found" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.376540    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494095    3884 kubelet_node_status.go:124] "Node was previously registered" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494211    3884 kubelet_node_status.go:78] "Successfully registered node" node="pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.494236    3884 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.496197    3884 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.516082    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-471447\" already exists" pod="kube-system/etcd-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.516109    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.524736    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-471447\" already exists" pod="kube-system/kube-apiserver-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.524767    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.534825    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-471447\" already exists" pod="kube-system/kube-controller-manager-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.534849    3884 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: E1221 21:05:43.544017    3884 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-471447\" already exists" pod="kube-system/kube-scheduler-pause-471447"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.651547    3884 apiserver.go:52] "Watching apiserver"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.675733    3884 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.748831    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6-xtables-lock\") pod \"kube-proxy-76nfp\" (UID: \"b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6\") " pod="kube-system/kube-proxy-76nfp"
	Dec 21 21:05:43 pause-471447 kubelet[3884]: I1221 21:05:43.748903    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6-lib-modules\") pod \"kube-proxy-76nfp\" (UID: \"b1b9dc59-2f1a-4bf6-9bcc-4d656e508ac6\") " pod="kube-system/kube-proxy-76nfp"
	Dec 21 21:05:49 pause-471447 kubelet[3884]: E1221 21:05:49.815362    3884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766351149813816324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:05:49 pause-471447 kubelet[3884]: E1221 21:05:49.815427    3884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766351149813816324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:05:59 pause-471447 kubelet[3884]: E1221 21:05:59.817489    3884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1766351159816799930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	Dec 21 21:05:59 pause-471447 kubelet[3884]: E1221 21:05:59.817985    3884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1766351159816799930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:128011}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-471447 -n pause-471447
helpers_test.go:270: (dbg) Run:  kubectl --context pause-471447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (85.74s)

                                                
                                    

Test pass (368/435)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.17
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 2.76
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.15
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.26
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.64
31 TestOffline 108.26
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 126.27
40 TestAddons/serial/GCPAuth/Namespaces 0.16
41 TestAddons/serial/GCPAuth/FakeCredentials 9.54
44 TestAddons/parallel/Registry 17.14
45 TestAddons/parallel/RegistryCreds 0.66
47 TestAddons/parallel/InspektorGadget 12
48 TestAddons/parallel/MetricsServer 6.87
50 TestAddons/parallel/CSI 56.7
51 TestAddons/parallel/Headlamp 16.99
52 TestAddons/parallel/CloudSpanner 5.6
53 TestAddons/parallel/LocalPath 53.78
54 TestAddons/parallel/NvidiaDevicePlugin 5.8
55 TestAddons/parallel/Yakd 11.24
57 TestAddons/StoppedEnableDisable 74.13
58 TestCertOptions 39.07
61 TestForceSystemdFlag 61.89
62 TestForceSystemdEnv 62.88
67 TestErrorSpam/setup 35.26
68 TestErrorSpam/start 0.35
69 TestErrorSpam/status 0.64
70 TestErrorSpam/pause 1.51
71 TestErrorSpam/unpause 1.68
72 TestErrorSpam/stop 5.71
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.25
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 36.71
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
84 TestFunctional/serial/CacheCmd/cache/add_local 1.08
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 37.61
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.33
95 TestFunctional/serial/LogsFileCmd 1.35
96 TestFunctional/serial/InvalidService 4.32
98 TestFunctional/parallel/ConfigCmd 0.43
100 TestFunctional/parallel/DryRun 0.22
101 TestFunctional/parallel/InternationalLanguage 0.11
102 TestFunctional/parallel/StatusCmd 0.67
106 TestFunctional/parallel/ServiceCmdConnect 38.43
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 81.42
110 TestFunctional/parallel/SSHCmd 0.37
111 TestFunctional/parallel/CpCmd 1.17
112 TestFunctional/parallel/MySQL 111.57
113 TestFunctional/parallel/FileSync 0.15
114 TestFunctional/parallel/CertSync 0.94
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
122 TestFunctional/parallel/License 0.23
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
134 TestFunctional/parallel/ProfileCmd/profile_list 0.29
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
136 TestFunctional/parallel/MountCmd/any-port 65.88
137 TestFunctional/parallel/MountCmd/specific-port 1.3
138 TestFunctional/parallel/MountCmd/VerifyCleanup 0.98
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
143 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
144 TestFunctional/parallel/ImageCommands/Setup 0.39
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.97
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
152 TestFunctional/parallel/Version/short 0.06
153 TestFunctional/parallel/Version/components 0.43
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
157 TestFunctional/parallel/ServiceCmd/List 1.2
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.19
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 75.33
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 26.76
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.05
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.01
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.53
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 32.95
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.32
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.33
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.58
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.46
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.67
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.16
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 106.48
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.38
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.3
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 28.55
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.23
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.23
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.43
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.26
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.42
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.19
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.2
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.19
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 2.68
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.15
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.29
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 0.98
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.5
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.77
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.54
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.32
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.3
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.31
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 63.1
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.29
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.31
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 1.21
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 1.23
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 220.01
262 TestMultiControlPlane/serial/DeployApp 6.64
263 TestMultiControlPlane/serial/PingHostFromPods 1.37
264 TestMultiControlPlane/serial/AddWorkerNode 44.39
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
267 TestMultiControlPlane/serial/CopyFile 11.1
268 TestMultiControlPlane/serial/StopSecondaryNode 87.78
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
270 TestMultiControlPlane/serial/RestartSecondaryNode 31.42
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 357.71
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.36
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 256.12
276 TestMultiControlPlane/serial/RestartCluster 98.59
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
278 TestMultiControlPlane/serial/AddSecondaryNode 72.36
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
284 TestJSONOutput/start/Command 78.46
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.71
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.16
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 76.1
316 TestMountStart/serial/StartWithMountFirst 20.1
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 19.49
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.26
323 TestMountStart/serial/RestartStopped 17.66
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 95.83
328 TestMultiNode/serial/DeployApp2Nodes 5.22
329 TestMultiNode/serial/PingHostFrom2Pods 0.86
330 TestMultiNode/serial/AddNode 39.38
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 6.01
334 TestMultiNode/serial/StopNode 2.17
335 TestMultiNode/serial/StartAfterStop 38.22
336 TestMultiNode/serial/RestartKeepsNodes 299.84
337 TestMultiNode/serial/DeleteNode 2.56
338 TestMultiNode/serial/StopMultiNode 152.84
339 TestMultiNode/serial/RestartMultiNode 85.23
340 TestMultiNode/serial/ValidateNameConflict 39.93
347 TestScheduledStopUnix 108.35
351 TestRunningBinaryUpgrade 401.63
353 TestKubernetesUpgrade 177.8
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 84.28
358 TestNoKubernetes/serial/StartWithStopK8s 25.8
359 TestStoppedBinaryUpgrade/Setup 0.53
360 TestStoppedBinaryUpgrade/Upgrade 87.43
361 TestNoKubernetes/serial/Start 50.15
362 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
363 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
364 TestNoKubernetes/serial/ProfileList 0.91
365 TestNoKubernetes/serial/Stop 1.25
366 TestNoKubernetes/serial/StartNoArgs 35.21
367 TestPreload/Start-NoPreload-PullImage 126.15
368 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
369 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
384 TestNetworkPlugins/group/false 3.63
388 TestISOImage/Setup 50.18
390 TestISOImage/Binaries/crictl 0.18
391 TestISOImage/Binaries/curl 0.16
392 TestISOImage/Binaries/docker 0.18
393 TestISOImage/Binaries/git 0.17
394 TestISOImage/Binaries/iptables 0.17
395 TestISOImage/Binaries/podman 0.17
396 TestISOImage/Binaries/rsync 0.17
397 TestISOImage/Binaries/socat 0.17
398 TestISOImage/Binaries/wget 0.17
399 TestISOImage/Binaries/VBoxControl 0.17
400 TestISOImage/Binaries/VBoxService 0.17
403 TestPause/serial/Start 80.41
406 TestStartStop/group/old-k8s-version/serial/FirstStart 91.74
408 TestStartStop/group/no-preload/serial/FirstStart 105.11
410 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
412 TestStartStop/group/embed-certs/serial/FirstStart 85.73
413 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.63
414 TestStartStop/group/old-k8s-version/serial/Stop 83.11
415 TestStartStop/group/no-preload/serial/DeployApp 9.34
416 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
417 TestStartStop/group/no-preload/serial/Stop 90.3
418 TestStartStop/group/embed-certs/serial/DeployApp 10.29
419 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
420 TestStartStop/group/old-k8s-version/serial/SecondStart 45.45
421 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
422 TestStartStop/group/embed-certs/serial/Stop 71.05
423 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
424 TestStartStop/group/no-preload/serial/SecondStart 53.3
425 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
426 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
427 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
428 TestStartStop/group/old-k8s-version/serial/Pause 2.92
430 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.93
431 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
432 TestStartStop/group/embed-certs/serial/SecondStart 55.17
433 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
434 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
435 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
436 TestStartStop/group/no-preload/serial/Pause 3.21
438 TestStartStop/group/newest-cni/serial/FirstStart 54.79
439 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
440 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
441 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
442 TestStartStop/group/embed-certs/serial/Pause 2.93
443 TestStartStop/group/newest-cni/serial/DeployApp 0
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
445 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
446 TestStartStop/group/newest-cni/serial/Stop 8.04
447 TestNetworkPlugins/group/auto/Start 82.29
448 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
449 TestStartStop/group/newest-cni/serial/SecondStart 48.91
450 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
451 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.68
452 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
453 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
454 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
455 TestStartStop/group/newest-cni/serial/Pause 3.1
456 TestNetworkPlugins/group/kindnet/Start 63.98
457 TestNetworkPlugins/group/auto/KubeletFlags 0.18
458 TestNetworkPlugins/group/auto/NetCatPod 11.27
459 TestNetworkPlugins/group/auto/DNS 0.17
460 TestNetworkPlugins/group/auto/Localhost 0.16
461 TestNetworkPlugins/group/auto/HairPin 0.14
462 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
463 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.64
464 TestNetworkPlugins/group/calico/Start 76.64
465 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
466 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
467 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
468 TestNetworkPlugins/group/kindnet/DNS 0.15
469 TestNetworkPlugins/group/kindnet/Localhost 0.14
470 TestNetworkPlugins/group/kindnet/HairPin 0.14
471 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
472 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
473 TestNetworkPlugins/group/custom-flannel/Start 73.19
474 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
475 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
476 TestNetworkPlugins/group/enable-default-cni/Start 95.01
477 TestNetworkPlugins/group/calico/ControllerPod 6.01
478 TestNetworkPlugins/group/calico/KubeletFlags 0.22
479 TestNetworkPlugins/group/calico/NetCatPod 10.29
480 TestNetworkPlugins/group/calico/DNS 0.23
481 TestNetworkPlugins/group/calico/Localhost 0.19
482 TestNetworkPlugins/group/calico/HairPin 0.16
483 TestNetworkPlugins/group/flannel/Start 67.42
484 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
485 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
486 TestNetworkPlugins/group/custom-flannel/DNS 0.15
487 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
488 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
489 TestNetworkPlugins/group/bridge/Start 85.51
490 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
491 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
492 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
493 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
494 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
495 TestNetworkPlugins/group/flannel/ControllerPod 6.01
496 TestPreload/PreloadSrc/gcs 4.08
497 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
498 TestNetworkPlugins/group/flannel/NetCatPod 10.28
499 TestPreload/PreloadSrc/github 5.21
500 TestPreload/PreloadSrc/gcs-cached 0.6
502 TestISOImage/PersistentMounts//data 0.17
503 TestISOImage/PersistentMounts//var/lib/docker 0.17
504 TestISOImage/PersistentMounts//var/lib/cni 0.17
505 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
506 TestISOImage/PersistentMounts//var/lib/minikube 0.17
507 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
508 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
509 TestISOImage/VersionJSON 0.17
510 TestISOImage/eBPFSupport 0.16
511 TestNetworkPlugins/group/flannel/DNS 0.17
512 TestNetworkPlugins/group/flannel/Localhost 0.13
513 TestNetworkPlugins/group/flannel/HairPin 0.14
514 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
515 TestNetworkPlugins/group/bridge/NetCatPod 10.25
516 TestNetworkPlugins/group/bridge/DNS 0.14
517 TestNetworkPlugins/group/bridge/Localhost 0.12
518 TestNetworkPlugins/group/bridge/HairPin 0.12
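Each row above is the test's order number, its name, and its duration in seconds. The following is a minimal Go sketch (hypothetical tooling, assuming exactly that three-column layout on stdin) that totals the durations and lists the slowest tests; run against the rows above it would surface entries such as TestRunningBinaryUpgrade (401.63s) at the top.

// summarize_durations.go: hypothetical sketch, not part of this report's tooling.
// Reads rows shaped like "327 TestMultiNode/serial/FreshStart2Nodes 95.83" from
// stdin, sums the durations (seconds), and prints the slowest entries.
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type row struct {
	name string
	secs float64
}

func main() {
	var rows []row
	var total float64
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) != 3 {
			continue // skip headers and blank lines
		}
		secs, err := strconv.ParseFloat(f[2], 64)
		if err != nil {
			continue
		}
		rows = append(rows, row{name: f[1], secs: secs})
		total += secs
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].secs > rows[j].secs })
	fmt.Printf("tests: %d, total: %.2fs\n", len(rows), total)
	for i := 0; i < len(rows) && i < 5; i++ {
		fmt.Printf("%-60s %8.2fs\n", rows[i].name, rows[i].secs)
	}
}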
TestDownloadOnly/v1.28.0/json-events (7.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-240302 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-240302 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.167762027s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1221 19:46:18.371561  126345 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1221 19:46:18.371676  126345 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-240302
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-240302: exit status 85 (72.323398ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-240302 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-240302 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:11.257644  126357 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:11.257923  126357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:11.257934  126357 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:11.257939  126357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:11.258123  126357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	W1221 19:46:11.258249  126357 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22179-122429/.minikube/config/config.json: open /home/jenkins/minikube-integration/22179-122429/.minikube/config/config.json: no such file or directory
	I1221 19:46:11.258766  126357 out.go:368] Setting JSON to true
	I1221 19:46:11.259704  126357 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12521,"bootTime":1766333850,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:11.259764  126357 start.go:143] virtualization: kvm guest
	I1221 19:46:11.264859  126357 out.go:99] [download-only-240302] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:11.265002  126357 notify.go:221] Checking for updates...
	W1221 19:46:11.265026  126357 preload.go:369] Failed to list preload files: open /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball: no such file or directory
	I1221 19:46:11.267171  126357 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:11.268774  126357 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:11.269956  126357 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:46:11.271217  126357 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:11.272315  126357 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 19:46:11.274524  126357 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 19:46:11.274755  126357 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:11.308516  126357 out.go:99] Using the kvm2 driver based on user configuration
	I1221 19:46:11.308556  126357 start.go:309] selected driver: kvm2
	I1221 19:46:11.308564  126357 start.go:928] validating driver "kvm2" against <nil>
	I1221 19:46:11.308885  126357 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:11.309372  126357 start_flags.go:413] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1221 19:46:11.309566  126357 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 19:46:11.309600  126357 cni.go:84] Creating CNI manager for ""
	I1221 19:46:11.309655  126357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1221 19:46:11.309665  126357 start_flags.go:338] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1221 19:46:11.309705  126357 start.go:353] cluster config:
	{Name:download-only-240302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-240302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:46:11.309882  126357 iso.go:125] acquiring lock: {Name:mk32aed4917b82431a8f5160a35db6118385a2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 19:46:11.311544  126357 out.go:99] Downloading VM boot image ...
	I1221 19:46:11.311590  126357 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22179-122429/.minikube/cache/iso/amd64/minikube-v1.37.0-1766254259-22261-amd64.iso
	I1221 19:46:14.697762  126357 out.go:99] Starting "download-only-240302" primary control-plane node in "download-only-240302" cluster
	I1221 19:46:14.697806  126357 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1221 19:46:14.713661  126357 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1221 19:46:14.713705  126357 cache.go:65] Caching tarball of preloaded images
	I1221 19:46:14.713900  126357 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1221 19:46:14.715621  126357 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1221 19:46:14.715647  126357 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1221 19:46:14.715657  126357 preload.go:333] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1221 19:46:14.736918  126357 preload.go:310] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1221 19:46:14.737028  126357 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-240302 host does not exist
	  To start a cluster, run: "minikube start -p download-only-240302"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-240302
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.3/json-events (2.76s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-979089 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-979089 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (2.75606765s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (2.76s)

TestDownloadOnly/v1.34.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1221 19:46:21.492447  126345 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1221 19:46:21.492506  126345 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-979089
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-979089: exit status 85 (71.109478ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-240302 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-240302 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-240302                                                                                                                                                 │ download-only-240302 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-979089 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-979089 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:18.788030  126552 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:18.788259  126552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:18.788266  126552 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:18.788270  126552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:18.788440  126552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:46:18.788910  126552 out.go:368] Setting JSON to true
	I1221 19:46:18.789699  126552 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12529,"bootTime":1766333850,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:18.789758  126552 start.go:143] virtualization: kvm guest
	I1221 19:46:18.791702  126552 out.go:99] [download-only-979089] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:18.791870  126552 notify.go:221] Checking for updates...
	I1221 19:46:18.792989  126552 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:18.794400  126552 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:18.795552  126552 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:46:18.796711  126552 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:18.797801  126552 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-979089 host does not exist
	  To start a cluster, run: "minikube start -p download-only-979089"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-979089
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-rc.1/json-events (3.26s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-836309 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-836309 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.255460059s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.26s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1221 19:46:25.123744  126345 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1221 19:46:25.123797  126345 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-836309
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-836309: exit status 85 (71.567022ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-240302 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-240302 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-240302                                                                                                                                                      │ download-only-240302 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-979089 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-979089 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-979089                                                                                                                                                      │ download-only-979089 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-836309 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-836309 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:21.919416  126718 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:21.919672  126718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:21.919681  126718 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:21.919686  126718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:21.919900  126718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:46:21.920352  126718 out.go:368] Setting JSON to true
	I1221 19:46:21.921172  126718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12532,"bootTime":1766333850,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:21.921227  126718 start.go:143] virtualization: kvm guest
	I1221 19:46:21.923182  126718 out.go:99] [download-only-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:21.923344  126718 notify.go:221] Checking for updates...
	I1221 19:46:21.924788  126718 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:21.926271  126718 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:21.927503  126718 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:46:21.932118  126718 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:46:21.933397  126718 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-836309 host does not exist
	  To start a cluster, run: "minikube start -p download-only-836309"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-836309
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
I1221 19:46:25.928745  126345 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-061430 --alsologtostderr --binary-mirror http://127.0.0.1:41125 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-061430" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-061430
--- PASS: TestBinaryMirror (0.64s)

TestOffline (108.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-694743 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-694743 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m46.771872345s)
helpers_test.go:176: Cleaning up "offline-crio-694743" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-694743
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-694743: (1.483372515s)
--- PASS: TestOffline (108.26s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-659513
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-659513: exit status 85 (69.715007ms)

-- stdout --
	* Profile "addons-659513" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659513"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-659513
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-659513: exit status 85 (68.889793ms)

-- stdout --
	* Profile "addons-659513" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659513"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (126.27s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-659513 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-659513 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.268961566s)
--- PASS: TestAddons/Setup (126.27s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-659513 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-659513 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-659513 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-659513 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7f347285-8b81-4c24-9b59-da519e7b35b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7f347285-8b81-4c24-9b59-da519e7b35b0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004233019s
addons_test.go:696: (dbg) Run:  kubectl --context addons-659513 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-659513 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-659513 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

TestAddons/parallel/Registry (17.14s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.410866ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-dvnl4" [56216ff6-db76-45d5-945d-2bf21a023ebf] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.053803265s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-kntxd" [1893f6cf-53cb-4c2d-acea-6739ff305373] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00470711s
addons_test.go:394: (dbg) Run:  kubectl --context addons-659513 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-659513 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-659513 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.117916624s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 ip
2025/12/21 19:49:07 [DEBUG] GET http://192.168.39.164:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.14s)

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 6.518595ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-659513
addons_test.go:334: (dbg) Run:  kubectl --context addons-659513 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/InspektorGadget (12s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-nw7ns" [de1f4e4b-a471-4b5e-bfe0-04d34cf6404a] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006784533s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable inspektor-gadget --alsologtostderr -v=1: (5.995781689s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

TestAddons/parallel/MetricsServer (6.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 19.344014ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-v72tn" [68904163-d7f9-411e-9a48-c014af0cef06] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006283842s
addons_test.go:465: (dbg) Run:  kubectl --context addons-659513 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)

TestAddons/parallel/CSI (56.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1221 19:49:02.655618  126345 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1221 19:49:02.671674  126345 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1221 19:49:02.671700  126345 kapi.go:107] duration metric: took 16.107657ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 16.11725ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-659513 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-659513 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [4d7713d0-19a7-4d99-90c0-adccd6188f5c] Pending
helpers_test.go:353: "task-pv-pod" [4d7713d0-19a7-4d99-90c0-adccd6188f5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [4d7713d0-19a7-4d99-90c0-adccd6188f5c] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004700917s
addons_test.go:574: (dbg) Run:  kubectl --context addons-659513 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-659513 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-659513 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-659513 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-659513 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-659513 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-659513 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [dae614d2-2fb4-4140-bb0d-1c5f3834d677] Pending
helpers_test.go:353: "task-pv-pod-restore" [dae614d2-2fb4-4140-bb0d-1c5f3834d677] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [dae614d2-2fb4-4140-bb0d-1c5f3834d677] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004164376s
addons_test.go:616: (dbg) Run:  kubectl --context addons-659513 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-659513 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-659513 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.920181818s)
--- PASS: TestAddons/parallel/CSI (56.70s)

TestAddons/parallel/Headlamp (16.99s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-659513 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-7f5bcd4678-rjhfw" [df4f22d6-4d6d-42a9-bfb3-ede5e7097489] Pending
helpers_test.go:353: "headlamp-7f5bcd4678-rjhfw" [df4f22d6-4d6d-42a9-bfb3-ede5e7097489] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-7f5bcd4678-rjhfw" [df4f22d6-4d6d-42a9-bfb3-ede5e7097489] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-7f5bcd4678-rjhfw" [df4f22d6-4d6d-42a9-bfb3-ede5e7097489] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.032981162s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable headlamp --alsologtostderr -v=1: (6.072444003s)
--- PASS: TestAddons/parallel/Headlamp (16.99s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-8m82d" [954a5f45-9256-4c25-81f8-53a2a4e8ef2e] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.048236802s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (53.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-659513 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-659513 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [8ddf5837-5baf-448e-8ffc-264ba9550ebf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [8ddf5837-5baf-448e-8ffc-264ba9550ebf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [8ddf5837-5baf-448e-8ffc-264ba9550ebf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004010528s
addons_test.go:969: (dbg) Run:  kubectl --context addons-659513 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 ssh "cat /opt/local-path-provisioner/pvc-7cf3985a-8a2e-4729-b39d-80336e9e7676_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-659513 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-659513 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.968568957s)
--- PASS: TestAddons/parallel/LocalPath (53.78s)

TestAddons/parallel/NvidiaDevicePlugin (5.8s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-ql2hl" [76700fd6-090f-485b-97c5-07cea983a62e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.062135757s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.80s)

TestAddons/parallel/Yakd (11.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-pt428" [45518159-6583-4a02-824f-c8d9c099b195] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.062274654s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-659513 addons disable yakd --alsologtostderr -v=1: (6.175321036s)
--- PASS: TestAddons/parallel/Yakd (11.24s)

TestAddons/StoppedEnableDisable (74.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-659513
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-659513: (1m13.920757591s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-659513
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-659513
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-659513
--- PASS: TestAddons/StoppedEnableDisable (74.13s)

TestCertOptions (39.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-764127 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1221 21:04:07.333629  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-764127 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (37.711647231s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-764127 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-764127 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-764127 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-764127" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-764127
--- PASS: TestCertOptions (39.07s)

TestForceSystemdFlag (61.89s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-048347 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-048347 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.830085742s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-048347 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-048347" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-048347
--- PASS: TestForceSystemdFlag (61.89s)

TestForceSystemdEnv (62.88s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-764266 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-764266 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.993155814s)
helpers_test.go:176: Cleaning up "force-systemd-env-764266" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-764266
--- PASS: TestForceSystemdEnv (62.88s)

TestErrorSpam/setup (35.26s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-439876 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-439876 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-439876 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-439876 --driver=kvm2  --container-runtime=crio: (35.256490573s)
--- PASS: TestErrorSpam/setup (35.26s)

TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.64s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 status
--- PASS: TestErrorSpam/status (0.64s)

TestErrorSpam/pause (1.51s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 pause
E1221 19:53:33.673896  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:33.679235  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:33.689584  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:33.709923  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:33.750280  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:33.830707  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 pause
E1221 19:53:33.991214  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:34.312434  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.68s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 unpause
E1221 19:53:34.952999  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (5.71s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop
E1221 19:53:36.233914  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop: (1.951372041s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop
E1221 19:53:38.794940  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop: (1.821258249s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-439876 --log_dir /tmp/nospam-439876 stop: (1.93657466s)
--- PASS: TestErrorSpam/stop (5.71s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/test/nested/copy/126345/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1221 19:53:43.916123  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:54.156698  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:54:14.637264  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:54:55.598659  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-555265 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.247194994s)
--- PASS: TestFunctional/serial/StartWithProxy (80.25s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.71s)
=== RUN   TestFunctional/serial/SoftStart
I1221 19:55:02.470882  126345 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-555265 --alsologtostderr -v=8: (36.710504742s)
functional_test.go:678: soft start took 36.711207937s for "functional-555265" cluster.
I1221 19:55:39.181745  126345 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (36.71s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-555265 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:3.1: (1.006913057s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:3.3: (1.000345795s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 cache add registry.k8s.io/pause:latest: (1.037050789s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-555265 /tmp/TestFunctionalserialCacheCmdcacheadd_local4270537656/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache add minikube-local-cache-test:functional-555265
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache delete minikube-local-cache-test:functional-555265
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-555265
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.404193ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 kubectl -- --context functional-555265 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-555265 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (37.61s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 19:56:17.519381  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-555265 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.61268784s)
functional_test.go:776: restart took 37.612842984s for "functional-555265" cluster.
I1221 19:56:23.256322  126345 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (37.61s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-555265 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.33s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 logs: (1.329196279s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.35s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 logs --file /tmp/TestFunctionalserialLogsFileCmd633459749/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 logs --file /tmp/TestFunctionalserialLogsFileCmd633459749/001/logs.txt: (1.345267196s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (4.32s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-555265 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-555265
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-555265: exit status 115 (234.475484ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.15:31901 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-555265 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 config get cpus: exit status 14 (66.034267ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 config get cpus: exit status 14 (76.507209ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-555265 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.707044ms)

-- stdout --
	* [functional-555265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1221 19:57:10.025476  132170 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:57:10.025615  132170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:10.025624  132170 out.go:374] Setting ErrFile to fd 2...
	I1221 19:57:10.025628  132170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:10.025812  132170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:57:10.026302  132170 out.go:368] Setting JSON to false
	I1221 19:57:10.027116  132170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13180,"bootTime":1766333850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:57:10.027209  132170 start.go:143] virtualization: kvm guest
	I1221 19:57:10.029051  132170 out.go:179] * [functional-555265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:57:10.030444  132170 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:57:10.030452  132170 notify.go:221] Checking for updates...
	I1221 19:57:10.033150  132170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:57:10.034553  132170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:57:10.035952  132170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:57:10.037115  132170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:57:10.038609  132170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:57:10.040371  132170 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:57:10.041180  132170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:57:10.073780  132170 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 19:57:10.075089  132170 start.go:309] selected driver: kvm2
	I1221 19:57:10.075102  132170 start.go:928] validating driver "kvm2" against &{Name:functional-555265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-555265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:57:10.075193  132170 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:57:10.077235  132170 out.go:203] 
	W1221 19:57:10.078619  132170 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 19:57:10.079691  132170 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-555265 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-555265 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (111.018758ms)

-- stdout --
	* [functional-555265] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1221 19:57:09.915816  132155 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:57:09.915933  132155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:09.915939  132155 out.go:374] Setting ErrFile to fd 2...
	I1221 19:57:09.915943  132155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:57:09.916264  132155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 19:57:09.916714  132155 out.go:368] Setting JSON to false
	I1221 19:57:09.917575  132155 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13180,"bootTime":1766333850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:57:09.917632  132155 start.go:143] virtualization: kvm guest
	I1221 19:57:09.919522  132155 out.go:179] * [functional-555265] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1221 19:57:09.920730  132155 notify.go:221] Checking for updates...
	I1221 19:57:09.920762  132155 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:57:09.922264  132155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:57:09.923334  132155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 19:57:09.924517  132155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 19:57:09.925730  132155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:57:09.926969  132155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:57:09.928654  132155 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:57:09.929150  132155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:57:09.959617  132155 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1221 19:57:09.960597  132155 start.go:309] selected driver: kvm2
	I1221 19:57:09.960611  132155 start.go:928] validating driver "kvm2" against &{Name:functional-555265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-555265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:57:09.960717  132155 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:57:09.962607  132155 out.go:203] 
	W1221 19:57:09.963638  132155 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 19:57:09.964684  132155 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.67s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.67s)

TestFunctional/parallel/ServiceCmdConnect (38.43s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-555265 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-555265 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-xqzsp" [5ddd6a39-8c3b-401f-bf65-e01981d7058f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-xqzsp" [5ddd6a39-8c3b-401f-bf65-e01981d7058f] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 38.004113575s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.15:30135
functional_test.go:1680: http://192.168.39.15:30135: success! body:
Request served by hello-node-connect-7d85dfc575-xqzsp

HTTP/1.1 GET /

Host: 192.168.39.15:30135
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (38.43s)

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (81.42s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [cf89d3de-8549-43e7-b379-c34189591f83] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003691223s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-555265 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-555265 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-555265 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-555265 apply -f testdata/storage-provisioner/pod.yaml
I1221 19:56:37.072585  126345 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f5750c40-5d09-425e-a2b4-a2c24fb27dde] Pending
helpers_test.go:353: "sp-pod" [f5750c40-5d09-425e-a2b4-a2c24fb27dde] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f5750c40-5d09-425e-a2b4-a2c24fb27dde] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m7.005853436s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-555265 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-555265 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-555265 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2a95a66b-135b-4cbd-8fad-a3ff9e0dd625] Pending
helpers_test.go:353: "sp-pod" [2a95a66b-135b-4cbd-8fad-a3ff9e0dd625] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004420272s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-555265 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (81.42s)

TestFunctional/parallel/SSHCmd (0.37s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.17s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh -n functional-555265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cp functional-555265:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1569433664/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh -n functional-555265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh -n functional-555265 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.17s)

TestFunctional/parallel/MySQL (111.57s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-555265 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-vtlfg" [12bfaa54-0a64-409e-9dae-6f1c3a619396] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-vtlfg" [12bfaa54-0a64-409e-9dae-6f1c3a619396] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m45.106945451s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;": exit status 1 (192.874349ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1221 19:59:33.100837  126345 retry.go:84] will retry after 1.1s: exit status 1 (duplicate log for 3m0.6s)
functional_test.go:1812: (dbg) Run:  kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;": exit status 1 (195.831404ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;": exit status 1 (131.486383ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-555265 exec mysql-6bcdcbc558-vtlfg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (111.57s)

TestFunctional/parallel/FileSync (0.15s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/126345/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /etc/test/nested/copy/126345/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.94s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/126345.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /etc/ssl/certs/126345.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/126345.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /usr/share/ca-certificates/126345.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1263452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /etc/ssl/certs/1263452.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1263452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /usr/share/ca-certificates/1263452.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.94s)
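The cert sync check above looks for each certificate both under its literal name (126345.pem, 1263452.pem) and under a short hashed name (51391683.0, 3ec20f2e.0), which is presumably the OpenSSL subject-hash naming convention used in /etc/ssl/certs. The rough sketch below derives that hashed name on a host with openssl installed; the input path is illustrative, and the exact flags the sync code uses are not visible in this log.

// certhash.go: compute the /etc/ssl/certs/<hash>.0 style name for a PEM certificate
// by shelling out to openssl. The input path is an illustrative assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/etc/ssl/certs/126345.pem" // illustrative; any PEM certificate works
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expected hashed name: /etc/ssl/certs/%s.0\n", hash)
}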

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-555265 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "sudo systemctl is-active docker": exit status 1 (192.311187ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "sudo systemctl is-active containerd": exit status 1 (194.143829ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
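Both probes above are expected to fail: with cri-o as the active runtime, systemctl is-active reports docker and containerd as "inactive" and exits with status 3, and that non-zero exit is the pass condition. The small sketch below shows the same inverted check; the binary path and profile name are copied from the log, everything else is an assumption rather than the test's own helper.

// inactivecheck.go: assert that a systemd unit is NOT active inside the minikube VM.
// A non-zero exit from `systemctl is-active` with stdout other than "active" is the pass case.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeDisabled(profile, unit string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		fmt.Sprintf("sudo systemctl is-active %s", unit))
	out, err := cmd.Output() // stdout only; "inactive" is expected here
	state := strings.TrimSpace(string(out))
	return err != nil && state != "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s disabled: %v\n", unit, runtimeDisabled("functional-555265", unit))
	}
}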

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "229.32503ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.76582ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "240.554792ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.075988ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (65.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdany-port557170335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766346992399771173" to /tmp/TestFunctionalparallelMountCmdany-port557170335/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766346992399771173" to /tmp/TestFunctionalparallelMountCmdany-port557170335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766346992399771173" to /tmp/TestFunctionalparallelMountCmdany-port557170335/001/test-1766346992399771173
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (150.317321ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 19:56:32.550386  126345 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 test-1766346992399771173
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh cat /mount-9p/test-1766346992399771173
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-555265 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [cc92143a-3635-4b7b-b54f-0bf476a137f8] Pending
helpers_test.go:353: "busybox-mount" [cc92143a-3635-4b7b-b54f-0bf476a137f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [cc92143a-3635-4b7b-b54f-0bf476a137f8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [cc92143a-3635-4b7b-b54f-0bf476a137f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m4.004364784s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-555265 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdany-port557170335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (65.88s)
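The findmnt probe above illustrates the usual pattern for background 9p mounts: the first check can race the mount daemon and fail, so it is retried before the directory listing and stat calls. A standalone sketch of that wait-for-mount loop over minikube ssh follows; the mount point, retry budget, and delay are assumptions.

// waitmount.go: wait until a 9p mount shows up inside the minikube VM.
// Mount point, binary path, profile, and retry budget are illustrative assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		profile    = "functional-555265"
		mountPoint = "/mount-9p"
		attempts   = 10
	)
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.Output(); err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared at", mountPoint)
}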

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdspecific-port2895988610/001:/mount-9p --alsologtostderr -v=1 --port 35235]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (150.67873ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 19:57:38.435519  126345 retry.go:84] will retry after 500ms: exit status 1 (duplicate log for 1m5.9s)
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdspecific-port2895988610/001:/mount-9p --alsologtostderr -v=1 --port 35235] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "sudo umount -f /mount-9p": exit status 1 (149.859231ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-555265 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdspecific-port2895988610/001:/mount-9p --alsologtostderr -v=1 --port 35235] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T" /mount1: exit status 1 (158.083955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-555265 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-555265 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2519347037/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-555265 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-555265
localhost/kicbase/echo-server:functional-555265
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-555265 image ls --format short --alsologtostderr:
I1221 19:57:52.180150  132895 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:52.180286  132895 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.180301  132895 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:52.180309  132895 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.180520  132895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:52.181112  132895 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.181215  132895 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.183228  132895 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:52.185352  132895 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.185792  132895 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:52.185834  132895 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.185981  132895 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:52.264837  132895 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-555265 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-555265  │ 5b6b1d90acfad │ 1.47MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-555265  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-555265  │ cd4b3eb4f6a9f │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-555265 image ls --format table --alsologtostderr:
I1221 19:57:55.477719  132978 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:55.477959  132978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:55.477968  132978 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:55.477972  132978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:55.478141  132978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:55.478716  132978 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:55.478817  132978 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:55.481028  132978 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:55.483368  132978 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:55.483824  132978 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:55.483851  132978 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:55.483988  132978 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:55.560174  132978 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-555265 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pau
se:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"36456489cc1cccfb184e92b89d68720ede4cacb9a5539b7e5b9db7c965bc18d9","repoDigests":["docker.io/library/d887d5cb6a9a552296e51ca1683d3ffbe2a811ab603a3264d96c70f73f24cef3-tmp@sha256:640d50b24fdaebbe1a821f1f166416997699d6d4aac22a923dd605f771920efc"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd4b3eb4f6a9f04b8fd36391d00b3b8d4ea7679dc853f53c03e9a822919dfda5",
"repoDigests":["localhost/minikube-local-cache-test@sha256:2af204331f5c434097f6c1bd08c87a0a4daf41fb36bd2ea58c587ff8acee3222"],"repoTags":["localhost/minikube-local-cache-test:functional-555265"],"size":"3330"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289
c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5b6b1d90acfadfddc6d7383d7f6319b67b6b30e902095c65aaa40fb534513b17","repoDigests":["localhost/my-image@sha256:50b61b5416f9ea2c3ccdf16ab07a4418a261672f6da94628272b9cd4a60171b4"],"repoTags":["localhost/my-image:functional-555265"],"size":"1468599"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.i
o/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sh
a256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-555265"],"size":"4944818"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns
/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-555265 image ls --format json --alsologtostderr:
I1221 19:57:55.295676  132967 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:55.295796  132967 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:55.295802  132967 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:55.295806  132967 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:55.295992  132967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:55.296558  132967 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:55.296659  132967 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:55.298653  132967 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:55.300795  132967 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:55.301183  132967 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:55.301212  132967 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:55.301394  132967 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:55.377682  132967 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
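The JSON shown above is a flat array of objects with id, repoDigests, repoTags, and size fields, so it is easy to consume from a script. The sketch below decodes that output and prints tag and size per image; the field names are read off the output above, while the struct definition and the invocation are otherwise assumptions.

// imagels.go: list images via `minikube image ls --format json` and print tag and size.
// Field names match the JSON shown above; binary path and profile are copied from the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // size is serialized as a string in the output above
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-555265",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}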

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-555265 image ls --format yaml --alsologtostderr:
- id: cd4b3eb4f6a9f04b8fd36391d00b3b8d4ea7679dc853f53c03e9a822919dfda5
repoDigests:
- localhost/minikube-local-cache-test@sha256:2af204331f5c434097f6c1bd08c87a0a4daf41fb36bd2ea58c587ff8acee3222
repoTags:
- localhost/minikube-local-cache-test:functional-555265
size: "3330"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-555265
size: "4944818"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-555265 image ls --format yaml --alsologtostderr:
I1221 19:57:52.360113  132906 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:52.360406  132906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.360417  132906 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:52.360421  132906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.360655  132906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:52.361241  132906 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.361375  132906 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.363534  132906 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:52.366093  132906 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.366534  132906 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:52.366570  132906 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.366735  132906 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:52.444369  132906 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-555265 ssh pgrep buildkitd: exit status 1 (148.241865ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image build -t localhost/my-image:functional-555265 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 image build -t localhost/my-image:functional-555265 testdata/build --alsologtostderr: (2.422298929s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-555265 image build -t localhost/my-image:functional-555265 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 36456489cc1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-555265
--> 5b6b1d90acf
Successfully tagged localhost/my-image:functional-555265
5b6b1d90acfadfddc6d7383d7f6319b67b6b30e902095c65aaa40fb534513b17
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-555265 image build -t localhost/my-image:functional-555265 testdata/build --alsologtostderr:
I1221 19:57:52.689826  132929 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:52.689937  132929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.689946  132929 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:52.689950  132929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:52.690116  132929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 19:57:52.690714  132929 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.691429  132929 config.go:182] Loaded profile config "functional-555265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:57:52.693707  132929 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:52.696003  132929 main.go:144] libmachine: domain functional-555265 has defined MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.696477  132929 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:a2:3b", ip: ""} in network mk-functional-555265: {Iface:virbr1 ExpiryTime:2025-12-21 20:53:57 +0000 UTC Type:0 Mac:52:54:00:0e:a2:3b Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-555265 Clientid:01:52:54:00:0e:a2:3b}
I1221 19:57:52.696532  132929 main.go:144] libmachine: domain functional-555265 has defined IP address 192.168.39.15 and MAC address 52:54:00:0e:a2:3b in network mk-functional-555265
I1221 19:57:52.696717  132929 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-555265/id_rsa Username:docker}
I1221 19:57:52.772663  132929 build_images.go:162] Building image from path: /tmp/build.2523140156.tar
I1221 19:57:52.772743  132929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 19:57:52.784541  132929 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2523140156.tar
I1221 19:57:52.789569  132929 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2523140156.tar: stat -c "%s %y" /var/lib/minikube/build/build.2523140156.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2523140156.tar': No such file or directory
I1221 19:57:52.789603  132929 ssh_runner.go:362] scp /tmp/build.2523140156.tar --> /var/lib/minikube/build/build.2523140156.tar (3072 bytes)
I1221 19:57:52.826810  132929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2523140156
I1221 19:57:52.842034  132929 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2523140156 -xf /var/lib/minikube/build/build.2523140156.tar
I1221 19:57:52.856722  132929 crio.go:315] Building image: /var/lib/minikube/build/build.2523140156
I1221 19:57:52.856800  132929 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-555265 /var/lib/minikube/build/build.2523140156 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1221 19:57:55.021334  132929 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-555265 /var/lib/minikube/build/build.2523140156 --cgroup-manager=cgroupfs: (2.164455875s)
I1221 19:57:55.021443  132929 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2523140156
I1221 19:57:55.035720  132929 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2523140156.tar
I1221 19:57:55.048890  132929 build_images.go:218] Built localhost/my-image:functional-555265 from /tmp/build.2523140156.tar
I1221 19:57:55.048929  132929 build_images.go:134] succeeded building to: functional-555265
I1221 19:57:55.048934  132929 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
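The build above runs a three-step context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) through the image build subcommand and then re-lists images to confirm the new tag exists. A sketch of the same build-then-verify round trip from outside the harness follows; the binary path, profile, tag, and build directory are taken from the log, and the substring check is an illustrative assumption.

// imagebuild.go: build an image inside the minikube VM and verify it is listed afterwards.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		bin     = "out/minikube-linux-amd64"
		profile = "functional-555265"
		tag     = "localhost/my-image:functional-555265"
	)
	build := exec.Command(bin, "-p", profile, "image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	list, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	fmt.Println("image present:", strings.Contains(string(list), tag))
}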

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-555265
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image load --daemon kicbase/echo-server:functional-555265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image load --daemon kicbase/echo-server:functional-555265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-555265
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image load --daemon kicbase/echo-server:functional-555265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image save kicbase/echo-server:functional-555265 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image rm kicbase/echo-server:functional-555265 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1221 19:57:45.005996  126345 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-555265
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 image save --daemon kicbase/echo-server:functional-555265 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-555265
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 update-context --alsologtostderr -v=2
E1221 19:58:33.673200  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:59:01.360612  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ServiceCmd/List (1.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 service list: (1.195217804s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.20s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-555265 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-555265 service list -o json: (1.192474555s)
functional_test.go:1504: Took "1.192589987s" to run "out/minikube-linux-amd64 -p functional-555265 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-555265
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-555265
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-555265
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-122429/.minikube/files/etc/test/nested/copy/126345/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (75.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-089730 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m15.326952407s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (75.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (26.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1221 20:07:53.242329  126345 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-089730 --alsologtostderr -v=8: (26.763699437s)
functional_test.go:678: soft start took 26.76413136s for "functional-089730" cluster.
I1221 20:08:20.006410  126345 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (26.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-089730 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 cache add registry.k8s.io/pause:3.3: (1.005205013s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 cache add registry.k8s.io/pause:latest: (1.06308602s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC1763697824/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache add minikube-local-cache-test:functional-089730
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache delete minikube-local-cache-test:functional-089730
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (177.965827ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.53s)
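
For reference, the cache reload flow exercised above can be repeated by hand against the same profile. A minimal sketch of the commands the test drives (same profile name and image as in the run above; exit codes assumed to match):

	# drop the image from the node's container runtime
	out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti now exits non-zero: no such image present on the node
	out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in minikube's local cache back onto the node
	out/minikube-linux-amd64 -p functional-089730 cache reload
	# the image is back, so the same inspecti succeeds
	out/minikube-linux-amd64 -p functional-089730 ssh sudo crictl inspecti registry.k8s.io/pause:latest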

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 kubectl -- --context functional-089730 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-089730 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (32.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 20:08:33.674772  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-089730 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.94486529s)
functional_test.go:776: restart took 32.944997074s for "functional-089730" cluster.
I1221 20:08:59.356366  126345 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (32.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-089730 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 logs: (1.317757003s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4088077170/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4088077170/001/logs.txt: (1.330569463s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-089730 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-089730
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-089730: exit status 115 (235.375364ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.143:32371 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-089730 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-089730 delete -f testdata/invalidsvc.yaml: (1.136911185s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 config get cpus: exit status 14 (62.181711ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 config get cpus: exit status 14 (79.642257ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)
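
The two exit status 14 results above are expected: minikube config get returns 14 when the key is unset. A minimal sketch of the round trip the test performs, using the same profile:

	# unset key -> exit 14, "specified key could not be found in config"
	out/minikube-linux-amd64 -p functional-089730 config get cpus
	# set, read back, then unset again
	out/minikube-linux-amd64 -p functional-089730 config set cpus 2
	out/minikube-linux-amd64 -p functional-089730 config get cpus
	out/minikube-linux-amd64 -p functional-089730 config unset cpus
	# unset once more -> exit 14 again
	out/minikube-linux-amd64 -p functional-089730 config get cpus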

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (110.644107ms)

-- stdout --
	* [functional-089730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1221 20:10:48.633480  137593 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:10:48.633765  137593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.633775  137593 out.go:374] Setting ErrFile to fd 2...
	I1221 20:10:48.633780  137593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.633988  137593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:10:48.634422  137593 out.go:368] Setting JSON to false
	I1221 20:10:48.635238  137593 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13999,"bootTime":1766333850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:10:48.635307  137593 start.go:143] virtualization: kvm guest
	I1221 20:10:48.637609  137593 out.go:179] * [functional-089730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:10:48.639206  137593 notify.go:221] Checking for updates...
	I1221 20:10:48.639241  137593 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:10:48.640647  137593 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:10:48.641896  137593 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 20:10:48.643068  137593 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 20:10:48.644308  137593 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:10:48.645663  137593 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:10:48.647462  137593 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:10:48.647947  137593 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:10:48.678612  137593 out.go:179] * Using the kvm2 driver based on existing profile
	I1221 20:10:48.679708  137593 start.go:309] selected driver: kvm2
	I1221 20:10:48.679724  137593 start.go:928] validating driver "kvm2" against &{Name:functional-089730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-089730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:10:48.679849  137593 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:10:48.681729  137593 out.go:203] 
	W1221 20:10:48.682820  137593 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 20:10:48.683771  137593 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-089730 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (122.803591ms)

-- stdout --
	* [functional-089730] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1221 20:10:48.874776  137625 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:10:48.874961  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.874974  137625 out.go:374] Setting ErrFile to fd 2...
	I1221 20:10:48.874980  137625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:10:48.875432  137625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:10:48.876089  137625 out.go:368] Setting JSON to false
	I1221 20:10:48.877361  137625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13999,"bootTime":1766333850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:10:48.877445  137625 start.go:143] virtualization: kvm guest
	I1221 20:10:48.879534  137625 out.go:179] * [functional-089730] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1221 20:10:48.881061  137625 notify.go:221] Checking for updates...
	I1221 20:10:48.881083  137625 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:10:48.882524  137625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:10:48.884057  137625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 20:10:48.885365  137625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 20:10:48.886412  137625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:10:48.887645  137625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:10:48.889464  137625 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:10:48.890237  137625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:10:48.921546  137625 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1221 20:10:48.922904  137625 start.go:309] selected driver: kvm2
	I1221 20:10:48.922919  137625 start.go:928] validating driver "kvm2" against &{Name:functional-089730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-089730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:10:48.923086  137625 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:10:48.925076  137625 out.go:203] 
	W1221 20:10:48.926169  137625 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 20:10:48.927191  137625 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (106.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f4291be3-1c09-465a-9574-d7d70f9846bf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004793211s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-089730 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-089730 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-089730 get pvc myclaim -o=json
I1221 20:09:14.575653  126345 retry.go:84] will retry after 2.3s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f08b720f-0983-4b2d-b13f-1563d1ae6a07 ResourceVersion:728 Generation:0 CreationTimestamp:2025-12-21 20:09:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001839a00 VolumeMode:0xc001839a10 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-089730 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-089730 apply -f testdata/storage-provisioner/pod.yaml
I1221 20:09:17.077587  126345 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [18b36476-f9e3-4d60-abc0-81d26cf443ff] Pending
helpers_test.go:353: "sp-pod" [18b36476-f9e3-4d60-abc0-81d26cf443ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [18b36476-f9e3-4d60-abc0-81d26cf443ff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m30.004234602s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-089730 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-089730 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-089730 delete -f testdata/storage-provisioner/pod.yaml: (1.065977022s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-089730 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e23cc32d-4ece-469e-ab30-b9d6da91c272] Pending
helpers_test.go:353: "sp-pod" [e23cc32d-4ece-469e-ab30-b9d6da91c272] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004897161s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-089730 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (106.48s)
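
The claim the test binds is visible in the kubectl.kubernetes.io/last-applied-configuration annotation logged at 20:09:14. Reconstructed from that annotation (the checked-in testdata/storage-provisioner/pvc.yaml may differ in formatting), it amounts to:

	kubectl --context functional-089730 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF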

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh -n functional-089730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cp functional-089730:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm562112966/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh -n functional-089730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh -n functional-089730 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (28.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-089730 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-9r6m2" [289487f7-0d17-49ed-81be-8171c3228316] Pending
helpers_test.go:353: "mysql-7d7b65bc95-9r6m2" [289487f7-0d17-49ed-81be-8171c3228316] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-9r6m2" [289487f7-0d17-49ed-81be-8171c3228316] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 22.004067598s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;": exit status 1 (165.81232ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1221 20:09:29.502370  126345 retry.go:84] will retry after 1.5s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;": exit status 1 (126.901253ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;": exit status 1 (117.353176ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (28.55s)
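
The ERROR 1045 and ERROR 2002 responses above are transient: mysqld inside the pod is still initialising, and the test simply retries the query until it succeeds. A rough hand-run equivalent (the retry loop is illustrative, not part of the test code; the pod name comes from the run above):

	# repeat the probe until the server accepts the connection
	until kubectl --context functional-089730 exec mysql-7d7b65bc95-9r6m2 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2
	done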

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/126345/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /etc/test/nested/copy/126345/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/126345.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /etc/ssl/certs/126345.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/126345.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /usr/share/ca-certificates/126345.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1263452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /etc/ssl/certs/1263452.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1263452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /usr/share/ca-certificates/1263452.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-089730 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "sudo systemctl is-active docker": exit status 1 (223.251245ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "sudo systemctl is-active containerd": exit status 1 (201.957588ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-089730 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-089730
localhost/kicbase/echo-server:functional-089730
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-089730 image ls --format short --alsologtostderr:
I1221 20:10:53.191189  137784 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:53.191440  137784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.191449  137784 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:53.191453  137784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.191635  137784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:53.192168  137784 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.192279  137784 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.194415  137784 ssh_runner.go:195] Run: systemctl --version
I1221 20:10:53.196539  137784 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.196918  137784 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:53.196949  137784 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.197083  137784 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:53.283244  137784 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-089730 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-089730  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-089730  │ cd4b3eb4f6a9f │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-089730 image ls --format table --alsologtostderr:
I1221 20:10:54.776090  137849 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:54.776351  137849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:54.776360  137849 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:54.776364  137849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:54.776589  137849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:54.777183  137849 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:54.777280  137849 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:54.779303  137849 ssh_runner.go:195] Run: systemctl --version
I1221 20:10:54.781599  137849 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:54.782016  137849 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:54.782038  137849 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:54.782172  137849 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:54.866257  137849 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-089730 image ls --format json --alsologtostderr:
[{"id":"cd4b3eb4f6a9f04b8fd36391d00b3b8d4ea7679dc853f53c03e9a822919dfda5","repoDigests":["localhost/minikube-local-cache-test@sha256:2af204331f5c434097f6c1bd08c87a0a4daf41fb36bd2ea58c587ff8acee3222"],"repoTags":["localhost/minikube-local-cache-test:functional-089730"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-089730"],"size":"4943877"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-089730 image ls --format json --alsologtostderr:
I1221 20:10:54.572313  137838 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:54.572425  137838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:54.572431  137838 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:54.572435  137838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:54.572654  137838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:54.573185  137838 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:54.573278  137838 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:54.575371  137838 ssh_runner.go:195] Run: systemctl --version
I1221 20:10:54.577616  137838 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:54.578121  137838 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:54.578156  137838 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:54.578448  137838 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:54.664209  137838 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-089730 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-089730
size: "4943877"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cd4b3eb4f6a9f04b8fd36391d00b3b8d4ea7679dc853f53c03e9a822919dfda5
repoDigests:
- localhost/minikube-local-cache-test@sha256:2af204331f5c434097f6c1bd08c87a0a4daf41fb36bd2ea58c587ff8acee3222
repoTags:
- localhost/minikube-local-cache-test:functional-089730
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-089730 image ls --format yaml --alsologtostderr:
I1221 20:10:53.383696  137796 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:53.383934  137796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.383942  137796 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:53.383947  137796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.384146  137796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:53.384715  137796 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.384809  137796 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.386814  137796 ssh_runner.go:195] Run: systemctl --version
I1221 20:10:53.389386  137796 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.390441  137796 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:53.390474  137796 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.390699  137796 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:53.475562  137796 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh pgrep buildkitd: exit status 1 (159.398426ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image build -t localhost/my-image:functional-089730 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 image build -t localhost/my-image:functional-089730 testdata/build --alsologtostderr: (2.324663154s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-089730 image build -t localhost/my-image:functional-089730 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8c5e4b6a898
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-089730
--> 28b0bde1a89
Successfully tagged localhost/my-image:functional-089730
28b0bde1a89ddfd5d0aefe09b91262b5bea7e9d0cee4b2e25fce03dd7330788d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-089730 image build -t localhost/my-image:functional-089730 testdata/build --alsologtostderr:
I1221 20:10:53.730212  137817 out.go:360] Setting OutFile to fd 1 ...
I1221 20:10:53.730309  137817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.730314  137817 out.go:374] Setting ErrFile to fd 2...
I1221 20:10:53.730318  137817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 20:10:53.730502  137817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
I1221 20:10:53.731055  137817 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.731628  137817 config.go:182] Loaded profile config "functional-089730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 20:10:53.733655  137817 ssh_runner.go:195] Run: systemctl --version
I1221 20:10:53.735601  137817 main.go:144] libmachine: domain functional-089730 has defined MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.735958  137817 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:61:1e", ip: ""} in network mk-functional-089730: {Iface:virbr1 ExpiryTime:2025-12-21 21:06:53 +0000 UTC Type:0 Mac:52:54:00:6a:61:1e Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-089730 Clientid:01:52:54:00:6a:61:1e}
I1221 20:10:53.735982  137817 main.go:144] libmachine: domain functional-089730 has defined IP address 192.168.39.143 and MAC address 52:54:00:6a:61:1e in network mk-functional-089730
I1221 20:10:53.736085  137817 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/functional-089730/id_rsa Username:docker}
I1221 20:10:53.824724  137817 build_images.go:162] Building image from path: /tmp/build.633405772.tar
I1221 20:10:53.824809  137817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 20:10:53.844557  137817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.633405772.tar
I1221 20:10:53.850648  137817 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.633405772.tar: stat -c "%s %y" /var/lib/minikube/build/build.633405772.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.633405772.tar': No such file or directory
I1221 20:10:53.850691  137817 ssh_runner.go:362] scp /tmp/build.633405772.tar --> /var/lib/minikube/build/build.633405772.tar (3072 bytes)
I1221 20:10:53.882190  137817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.633405772
I1221 20:10:53.894082  137817 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.633405772 -xf /var/lib/minikube/build/build.633405772.tar
I1221 20:10:53.905198  137817 crio.go:315] Building image: /var/lib/minikube/build/build.633405772
I1221 20:10:53.905266  137817 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-089730 /var/lib/minikube/build/build.633405772 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1221 20:10:55.965307  137817 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-089730 /var/lib/minikube/build/build.633405772 --cgroup-manager=cgroupfs: (2.059998841s)
I1221 20:10:55.965387  137817 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.633405772
I1221 20:10:55.979208  137817 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.633405772.tar
I1221 20:10:55.992597  137817 build_images.go:218] Built localhost/my-image:functional-089730 from /tmp/build.633405772.tar
I1221 20:10:55.992638  137817 build_images.go:134] succeeded building to: functional-089730
I1221 20:10:55.992643  137817 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
E1221 20:11:30.705082  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:30.710438  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:30.720770  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:30.741090  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:30.781503  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:30.862223  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:31.022703  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:31.343320  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:31.984368  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:33.264969  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:35.825560  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:40.946672  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:11:51.187209  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:12:11.668037  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:12:52.628923  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:13:33.673306  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:14:14.550087  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:16:30.705058  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:16:58.390320  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:18:33.673507  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image load --daemon kicbase/echo-server:functional-089730 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 image load --daemon kicbase/echo-server:functional-089730 --alsologtostderr: (1.090774048s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image load --daemon kicbase/echo-server:functional-089730 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-089730
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image load --daemon kicbase/echo-server:functional-089730 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image save kicbase/echo-server:functional-089730 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image rm kicbase/echo-server:functional-089730 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-089730
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 image save --daemon kicbase/echo-server:functional-089730 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "234.869748ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.883004ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "247.238051ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.214374ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (63.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766347782880052427" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766347782880052427" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766347782880052427" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001/test-1766347782880052427
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (158.241484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 20:09:43.038587  126345 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 20:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 20:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 20:09 test-1766347782880052427
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh cat /mount-9p/test-1766347782880052427
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-089730 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [23efd88c-e136-4f52-9ec1-1a751b7895ba] Pending
helpers_test.go:353: "busybox-mount" [23efd88c-e136-4f52-9ec1-1a751b7895ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1221 20:09:56.721627  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox-mount" [23efd88c-e136-4f52-9ec1-1a751b7895ba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [23efd88c-e136-4f52-9ec1-1a751b7895ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m1.003734173s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-089730 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1488401894/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (63.10s)
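The retry visible above (findmnt failing once, then passing after roughly 700ms) is the usual way to wait for a 9p mount to appear inside the guest. A minimal Go sketch of that polling loop, assuming the profile name and mount point from the log; the 30-second deadline is arbitrary.

	// Poll for the 9p mount inside the guest, retrying until a deadline.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		const profile = "functional-089730" // profile name taken from the log above
		deadline := time.Now().Add(30 * time.Second)
		for {
			err := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				log.Println("9p mount is visible inside the guest")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("mount never appeared: %v", err)
			}
			time.Sleep(700 * time.Millisecond) // the log retried after ~700ms
		}
	}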

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1363950450/001:/mount-9p --alsologtostderr -v=1 --port 32807]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.674841ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 20:10:46.139447  126345 retry.go:84] will retry after 400ms: exit status 1 (duplicate log for 1m16.6s)
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1363950450/001:/mount-9p --alsologtostderr -v=1 --port 32807] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "sudo umount -f /mount-9p": exit status 1 (164.109165ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-089730 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1363950450/001:/mount-9p --alsologtostderr -v=1 --port 32807] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.29s)
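The cleanup above shows "sudo umount -f /mount-9p" exiting non-zero once the mount daemon is already gone ("umount: /mount-9p: not mounted."). A hedged Go sketch that treats that case as already clean rather than as a failure:

	// Force-unmount, tolerating "not mounted" as an already-clean state.
	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		const profile = "functional-089730"
		err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo umount -f /mount-9p").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Println("unmounted")
		case errors.As(err, &exitErr):
			// The log shows the command failing once the daemon is stopped and the
			// guest reports "not mounted"; treat that as nothing left to clean up.
			log.Printf("umount exited with %d; assuming already unmounted", exitErr.ExitCode())
		default:
			log.Fatalf("could not run umount: %v", err)
		}
	}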

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T" /mount1: exit status 1 (180.342658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 ssh "findmnt -T" /mount3
I1221 20:10:48.397457  126345 detect.go:223] nested VM detected
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-089730 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-089730 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1947649763/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 service list: (1.210970683s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-089730 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-089730 service list -o json: (1.230438701s)
functional_test.go:1504: Took "1.230533717s" to run "out/minikube-linux-amd64 -p functional-089730 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.23s)
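A minimal Go sketch of consuming the "service list -o json" output run above. It assumes the output is a JSON array of objects, which this log does not spell out, so a decode failure is reported instead of being hidden:

	// Decode `minikube service list -o json` without a fixed schema.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-089730",
			"service", "list", "-o", "json").Output()
		if err != nil {
			log.Fatalf("service list failed: %v", err)
		}
		var entries []map[string]interface{} // assumption: a JSON array of objects
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatalf("unexpected output shape: %v", err)
		}
		fmt.Printf("%d services listed\n", len(entries))
		for _, e := range entries {
			fmt.Println(e)
		}
	}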

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-089730
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (220.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1221 20:21:30.705644  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m39.428567482s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (220.01s)
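The HA start above runs for several minutes, so the harness executes it under a context. A minimal Go sketch of the same invocation with the flags copied from the log, wrapped in a timeout; the 10-minute bound is an arbitrary choice for the sketch:

	// Run the HA start command under an explicit timeout.
	package main

	import (
		"context"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		// Flags copied from the invocation above; the timeout value is arbitrary.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "ha-167359",
			"start", "--ha", "--memory", "3072", "--wait", "true",
			"--driver=kvm2", "--container-runtime=crio")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("minikube start failed: %v", err)
		}
	}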

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.64s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 kubectl -- rollout status deployment/busybox: (4.256962961s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-n5rzj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-wvc4l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-z5zr4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-n5rzj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-wvc4l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-z5zr4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-n5rzj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-wvc4l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-z5zr4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.64s)
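The DNS checks above exec nslookup in every busybox replica for three names. A minimal Go sketch of that loop; the pod names are the ephemeral ones from this run and would normally be discovered first with the jsonpath query shown above:

	// Exec nslookup in each busybox pod for the names checked above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Pod names are the ephemeral ones from this particular run.
		pods := []string{"busybox-7b57f96db7-n5rzj", "busybox-7b57f96db7-wvc4l", "busybox-7b57f96db7-z5zr4"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, name := range names {
				out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-167359",
					"kubectl", "--", "exec", pod, "--", "nslookup", name).CombinedOutput()
				if err != nil {
					log.Fatalf("nslookup %s from %s failed: %v\n%s", name, pod, err, out)
				}
				fmt.Printf("%s resolved %s\n", pod, name)
			}
		}
	}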

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.37s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-n5rzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-n5rzj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-wvc4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-wvc4l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-z5zr4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 kubectl -- exec busybox-7b57f96db7-z5zr4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.39s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node add --alsologtostderr -v 5
E1221 20:23:33.674261  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 node add --alsologtostderr -v 5: (43.688052706s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.39s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-167359 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.1s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp testdata/cp-test.txt ha-167359:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2574893903/001/cp-test_ha-167359.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359:/home/docker/cp-test.txt ha-167359-m02:/home/docker/cp-test_ha-167359_ha-167359-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test_ha-167359_ha-167359-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359:/home/docker/cp-test.txt ha-167359-m03:/home/docker/cp-test_ha-167359_ha-167359-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test_ha-167359_ha-167359-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359:/home/docker/cp-test.txt ha-167359-m04:/home/docker/cp-test_ha-167359_ha-167359-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test_ha-167359_ha-167359-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp testdata/cp-test.txt ha-167359-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2574893903/001/cp-test_ha-167359-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m02:/home/docker/cp-test.txt ha-167359:/home/docker/cp-test_ha-167359-m02_ha-167359.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test_ha-167359-m02_ha-167359.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m02:/home/docker/cp-test.txt ha-167359-m03:/home/docker/cp-test_ha-167359-m02_ha-167359-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test_ha-167359-m02_ha-167359-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m02:/home/docker/cp-test.txt ha-167359-m04:/home/docker/cp-test_ha-167359-m02_ha-167359-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test_ha-167359-m02_ha-167359-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp testdata/cp-test.txt ha-167359-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2574893903/001/cp-test_ha-167359-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m03:/home/docker/cp-test.txt ha-167359:/home/docker/cp-test_ha-167359-m03_ha-167359.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test_ha-167359-m03_ha-167359.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m03:/home/docker/cp-test.txt ha-167359-m02:/home/docker/cp-test_ha-167359-m03_ha-167359-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test_ha-167359-m03_ha-167359-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m03:/home/docker/cp-test.txt ha-167359-m04:/home/docker/cp-test_ha-167359-m03_ha-167359-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test_ha-167359-m03_ha-167359-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp testdata/cp-test.txt ha-167359-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2574893903/001/cp-test_ha-167359-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m04:/home/docker/cp-test.txt ha-167359:/home/docker/cp-test_ha-167359-m04_ha-167359.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359 "sudo cat /home/docker/cp-test_ha-167359-m04_ha-167359.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m04:/home/docker/cp-test.txt ha-167359-m02:/home/docker/cp-test_ha-167359-m04_ha-167359-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m02 "sudo cat /home/docker/cp-test_ha-167359-m04_ha-167359-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 cp ha-167359-m04:/home/docker/cp-test.txt ha-167359-m03:/home/docker/cp-test_ha-167359-m04_ha-167359-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 ssh -n ha-167359-m03 "sudo cat /home/docker/cp-test_ha-167359-m04_ha-167359-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.10s)
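Each copy above is verified by reading the file back over SSH. A minimal Go sketch of one such round trip using the cp and ssh -n invocations from the log; the node name is just one of the nodes in this cluster:

	// Copy a file into a node, read it back over SSH, and compare.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "ha-167359"
		const node = "ha-167359-m02" // any node name from the log works here
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatalf("read local file: %v", err)
		}
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
			log.Fatalf("cp failed: %v", err)
		}
		got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("ssh cat failed: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatal("copied content does not match the source file")
		}
		log.Println("cp round trip verified")
	}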

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (87.78s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node stop m02 --alsologtostderr -v 5
E1221 20:24:07.333741  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.339099  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.349483  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.369917  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.410290  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.490672  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.651157  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:07.971399  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:08.612347  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:09.892916  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:12.454238  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:17.575424  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:27.816038  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:24:48.296796  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 node stop m02 --alsologtostderr -v 5: (1m27.263039232s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5: exit status 7 (521.118907ms)

                                                
                                                
-- stdout --
	ha-167359
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-167359-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167359-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-167359-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:25:26.438310  143247 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:25:26.438427  143247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:26.438432  143247 out.go:374] Setting ErrFile to fd 2...
	I1221 20:25:26.438436  143247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:26.438670  143247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:25:26.438852  143247 out.go:368] Setting JSON to false
	I1221 20:25:26.438880  143247 mustload.go:66] Loading cluster: ha-167359
	I1221 20:25:26.438974  143247 notify.go:221] Checking for updates...
	I1221 20:25:26.439360  143247 config.go:182] Loaded profile config "ha-167359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:25:26.439387  143247 status.go:174] checking status of ha-167359 ...
	I1221 20:25:26.441301  143247 status.go:371] ha-167359 host status = "Running" (err=<nil>)
	I1221 20:25:26.441319  143247 host.go:66] Checking if "ha-167359" exists ...
	I1221 20:25:26.443979  143247 main.go:144] libmachine: domain ha-167359 has defined MAC address 52:54:00:8c:9c:4f in network mk-ha-167359
	I1221 20:25:26.444532  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:9c:4f", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:19:30 +0000 UTC Type:0 Mac:52:54:00:8c:9c:4f Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-167359 Clientid:01:52:54:00:8c:9c:4f}
	I1221 20:25:26.444574  143247 main.go:144] libmachine: domain ha-167359 has defined IP address 192.168.39.191 and MAC address 52:54:00:8c:9c:4f in network mk-ha-167359
	I1221 20:25:26.444722  143247 host.go:66] Checking if "ha-167359" exists ...
	I1221 20:25:26.444982  143247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:25:26.447378  143247 main.go:144] libmachine: domain ha-167359 has defined MAC address 52:54:00:8c:9c:4f in network mk-ha-167359
	I1221 20:25:26.447791  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:9c:4f", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:19:30 +0000 UTC Type:0 Mac:52:54:00:8c:9c:4f Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-167359 Clientid:01:52:54:00:8c:9c:4f}
	I1221 20:25:26.447813  143247 main.go:144] libmachine: domain ha-167359 has defined IP address 192.168.39.191 and MAC address 52:54:00:8c:9c:4f in network mk-ha-167359
	I1221 20:25:26.448013  143247 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/ha-167359/id_rsa Username:docker}
	I1221 20:25:26.539222  143247 ssh_runner.go:195] Run: systemctl --version
	I1221 20:25:26.546150  143247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:25:26.565290  143247 kubeconfig.go:125] found "ha-167359" server: "https://192.168.39.254:8443"
	I1221 20:25:26.565327  143247 api_server.go:166] Checking apiserver status ...
	I1221 20:25:26.565365  143247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:25:26.588385  143247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W1221 20:25:26.600141  143247 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:25:26.600210  143247 ssh_runner.go:195] Run: ls
	I1221 20:25:26.605377  143247 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1221 20:25:26.614074  143247 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1221 20:25:26.614107  143247 status.go:463] ha-167359 apiserver status = Running (err=<nil>)
	I1221 20:25:26.614118  143247 status.go:176] ha-167359 status: &{Name:ha-167359 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:25:26.614146  143247 status.go:174] checking status of ha-167359-m02 ...
	I1221 20:25:26.616046  143247 status.go:371] ha-167359-m02 host status = "Stopped" (err=<nil>)
	I1221 20:25:26.616074  143247 status.go:384] host is not running, skipping remaining checks
	I1221 20:25:26.616081  143247 status.go:176] ha-167359-m02 status: &{Name:ha-167359-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:25:26.616098  143247 status.go:174] checking status of ha-167359-m03 ...
	I1221 20:25:26.617541  143247 status.go:371] ha-167359-m03 host status = "Running" (err=<nil>)
	I1221 20:25:26.617564  143247 host.go:66] Checking if "ha-167359-m03" exists ...
	I1221 20:25:26.620320  143247 main.go:144] libmachine: domain ha-167359-m03 has defined MAC address 52:54:00:de:de:cc in network mk-ha-167359
	I1221 20:25:26.620865  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:de:cc", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:21:47 +0000 UTC Type:0 Mac:52:54:00:de:de:cc Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-167359-m03 Clientid:01:52:54:00:de:de:cc}
	I1221 20:25:26.620891  143247 main.go:144] libmachine: domain ha-167359-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:de:de:cc in network mk-ha-167359
	I1221 20:25:26.621114  143247 host.go:66] Checking if "ha-167359-m03" exists ...
	I1221 20:25:26.621427  143247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:25:26.623796  143247 main.go:144] libmachine: domain ha-167359-m03 has defined MAC address 52:54:00:de:de:cc in network mk-ha-167359
	I1221 20:25:26.624151  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:de:cc", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:21:47 +0000 UTC Type:0 Mac:52:54:00:de:de:cc Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-167359-m03 Clientid:01:52:54:00:de:de:cc}
	I1221 20:25:26.624169  143247 main.go:144] libmachine: domain ha-167359-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:de:de:cc in network mk-ha-167359
	I1221 20:25:26.624295  143247 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/ha-167359-m03/id_rsa Username:docker}
	I1221 20:25:26.712799  143247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:25:26.736556  143247 kubeconfig.go:125] found "ha-167359" server: "https://192.168.39.254:8443"
	I1221 20:25:26.736611  143247 api_server.go:166] Checking apiserver status ...
	I1221 20:25:26.736650  143247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:25:26.758456  143247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1840/cgroup
	W1221 20:25:26.769888  143247 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1840/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:25:26.769944  143247 ssh_runner.go:195] Run: ls
	I1221 20:25:26.775020  143247 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1221 20:25:26.780120  143247 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1221 20:25:26.780146  143247 status.go:463] ha-167359-m03 apiserver status = Running (err=<nil>)
	I1221 20:25:26.780155  143247 status.go:176] ha-167359-m03 status: &{Name:ha-167359-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:25:26.780170  143247 status.go:174] checking status of ha-167359-m04 ...
	I1221 20:25:26.782066  143247 status.go:371] ha-167359-m04 host status = "Running" (err=<nil>)
	I1221 20:25:26.782090  143247 host.go:66] Checking if "ha-167359-m04" exists ...
	I1221 20:25:26.784411  143247 main.go:144] libmachine: domain ha-167359-m04 has defined MAC address 52:54:00:39:02:8e in network mk-ha-167359
	I1221 20:25:26.784808  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:02:8e", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:23:18 +0000 UTC Type:0 Mac:52:54:00:39:02:8e Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-167359-m04 Clientid:01:52:54:00:39:02:8e}
	I1221 20:25:26.784833  143247 main.go:144] libmachine: domain ha-167359-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:39:02:8e in network mk-ha-167359
	I1221 20:25:26.784963  143247 host.go:66] Checking if "ha-167359-m04" exists ...
	I1221 20:25:26.785139  143247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:25:26.787033  143247 main.go:144] libmachine: domain ha-167359-m04 has defined MAC address 52:54:00:39:02:8e in network mk-ha-167359
	I1221 20:25:26.787350  143247 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:02:8e", ip: ""} in network mk-ha-167359: {Iface:virbr1 ExpiryTime:2025-12-21 21:23:18 +0000 UTC Type:0 Mac:52:54:00:39:02:8e Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-167359-m04 Clientid:01:52:54:00:39:02:8e}
	I1221 20:25:26.787374  143247 main.go:144] libmachine: domain ha-167359-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:39:02:8e in network mk-ha-167359
	I1221 20:25:26.787508  143247 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/ha-167359-m04/id_rsa Username:docker}
	I1221 20:25:26.870297  143247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:25:26.889321  143247 status.go:176] ha-167359-m04 status: &{Name:ha-167359-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.78s)
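Note that "minikube status" exits non-zero (status 7 here) once any node is stopped, while still printing the per-node breakdown. A hedged Go sketch that surfaces that behaviour without assuming the exact exit-code mapping, which this log does not document:

	// Run `minikube status` and report degraded state from the exit code alone.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-167359", "status").Output()
		fmt.Print(string(out)) // the per-node breakdown is printed even on failure
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Println("all nodes report Running/Configured")
		case errors.As(err, &exitErr):
			log.Printf("status exited with code %d: at least one host or component is stopped", exitErr.ExitCode())
		default:
			log.Fatalf("could not run minikube status: %v", err)
		}
	}

Exit code 7 is what this run produced with one control-plane node down; the sketch deliberately only distinguishes zero from non-zero.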

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.42s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node start m02 --alsologtostderr -v 5
E1221 20:25:29.257358  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 node start m02 --alsologtostderr -v 5: (30.452376956s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (357.71s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 stop --alsologtostderr -v 5
E1221 20:26:30.707375  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:26:36.723971  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:26:51.178177  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:53.751073  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:28:33.674772  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:29:07.332994  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:29:35.018962  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 stop --alsologtostderr -v 5: (4m1.596686844s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 start --wait true --alsologtostderr -v 5
E1221 20:31:30.704920  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 start --wait true --alsologtostderr -v 5: (1m55.96105969s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (357.71s)
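The invariant checked above is that the node list is unchanged across a full stop/start cycle. A rough Go sketch of that comparison; it assumes the node list output is directly comparable before and after, which is the property the test asserts:

	// Compare `node list` output before and after a stop/start cycle.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
		if err != nil {
			log.Fatalf("%v failed: %v", args, err)
		}
		return string(out)
	}

	func main() {
		const profile = "ha-167359"
		before := run("-p", profile, "node", "list")
		run("-p", profile, "stop")
		run("-p", profile, "start", "--wait", "true")
		after := run("-p", profile, "node", "list")
		if before != after {
			log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
		}
		log.Println("node list preserved across restart")
	}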

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.36s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 node delete m03 --alsologtostderr -v 5: (17.740521469s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (256.12s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 stop --alsologtostderr -v 5
E1221 20:33:33.673929  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:34:07.335206  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:36:30.707410  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 stop --alsologtostderr -v 5: (4m16.04957955s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5: exit status 7 (67.435825ms)

                                                
                                                
-- stdout --
	ha-167359
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167359-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-167359-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:36:32.380918  146420 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:36:32.381144  146420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:36:32.381154  146420 out.go:374] Setting ErrFile to fd 2...
	I1221 20:36:32.381159  146420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:36:32.381376  146420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:36:32.381578  146420 out.go:368] Setting JSON to false
	I1221 20:36:32.381611  146420 mustload.go:66] Loading cluster: ha-167359
	I1221 20:36:32.381692  146420 notify.go:221] Checking for updates...
	I1221 20:36:32.382128  146420 config.go:182] Loaded profile config "ha-167359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:36:32.382155  146420 status.go:174] checking status of ha-167359 ...
	I1221 20:36:32.384625  146420 status.go:371] ha-167359 host status = "Stopped" (err=<nil>)
	I1221 20:36:32.384643  146420 status.go:384] host is not running, skipping remaining checks
	I1221 20:36:32.384647  146420 status.go:176] ha-167359 status: &{Name:ha-167359 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:36:32.384664  146420 status.go:174] checking status of ha-167359-m02 ...
	I1221 20:36:32.385822  146420 status.go:371] ha-167359-m02 host status = "Stopped" (err=<nil>)
	I1221 20:36:32.385836  146420 status.go:384] host is not running, skipping remaining checks
	I1221 20:36:32.385840  146420 status.go:176] ha-167359-m02 status: &{Name:ha-167359-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:36:32.385851  146420 status.go:174] checking status of ha-167359-m04 ...
	I1221 20:36:32.386908  146420 status.go:371] ha-167359-m04 host status = "Stopped" (err=<nil>)
	I1221 20:36:32.386922  146420 status.go:384] host is not running, skipping remaining checks
	I1221 20:36:32.386926  146420 status.go:176] ha-167359-m04 status: &{Name:ha-167359-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (256.12s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (98.59s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m37.958492292s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.59s)
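The readiness check above renders each node's Ready condition through a kubectl go-template. A minimal Go sketch that reuses that template and counts nodes reporting True; it assumes kubectl on the PATH is already pointed at this cluster, as in the log:

	// Count nodes whose Ready condition is True, using the go-template from the log.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			log.Fatalf("kubectl get nodes failed: %v", err)
		}
		ready := 0
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if strings.TrimSpace(line) == "True" {
				ready++
			}
		}
		fmt.Printf("%d nodes report Ready=True\n", ready)
	}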

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.36s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 node add --control-plane --alsologtostderr -v 5
E1221 20:38:33.673885  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:39:07.332747  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-167359 node add --control-plane --alsologtostderr -v 5: (1m11.687571617s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-167359 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.36s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
TestJSONOutput/start/Command (78.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-941607 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1221 20:40:30.379870  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-941607 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.464102841s)
--- PASS: TestJSONOutput/start/Command (78.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-941607 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-941607 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-941607 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-941607 --output=json --user=testUser: (7.161386639s)
--- PASS: TestJSONOutput/stop/Command (7.16s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-677538 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-677538 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.016801ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"779ae786-a168-4194-a69b-a8927e54301e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-677538] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d3ee165-f8bd-4c3d-be8f-b463060b328b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"f7ff05ab-677e-4321-802c-9eefd8664c08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a882c75-0288-4004-ab5c-ffd757f02ce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig"}}
	{"specversion":"1.0","id":"1a88e8c7-83cf-4996-af3e-a16a7722592d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube"}}
	{"specversion":"1.0","id":"f1460726-6c56-4c73-ae38-424899577e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"63f51498-0d2a-46ff-bb07-9adeebad8558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b4b2de3d-70e4-44fc-9263-3b0d6f038ff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-677538" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-677538
--- PASS: TestErrorJSONOutput (0.23s)
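
Each line in the --output=json transcript above is a CloudEvents-style JSON object with a "type" field (io.k8s.sigs.minikube.step, .info, .error) and a string-keyed "data" payload. As a rough illustration only (the struct mirrors just the fields visible in this log, and filter.go is a hypothetical file name, not part of the test suite), an error event such as the DRV_UNSUPPORTED_OS one could be picked out of such a stream like this:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the transcript above; the real
// minikube output may carry additional fields.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Usage sketch: minikube start -p foo --output=json 2>&1 | go run filter.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not a JSON event line; skip it
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}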

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (76.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-012848 --driver=kvm2  --container-runtime=crio
E1221 20:41:30.705630  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-012848 --driver=kvm2  --container-runtime=crio: (35.883025177s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-015587 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-015587 --driver=kvm2  --container-runtime=crio: (37.610163193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-012848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-015587
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-015587" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-015587
helpers_test.go:176: Cleaning up "first-012848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-012848
--- PASS: TestMinikubeProfile (76.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-930715 --memory=3072 --mount-string /tmp/TestMountStartserial3693620475/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-930715 --memory=3072 --mount-string /tmp/TestMountStartserial3693620475/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.094697687s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.10s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-930715 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-930715 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-952881 --memory=3072 --mount-string /tmp/TestMountStartserial3693620475/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-952881 --memory=3072 --mount-string /tmp/TestMountStartserial3693620475/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.490898833s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-930715 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-952881
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-952881: (1.262236461s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (17.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-952881
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-952881: (16.658823411s)
--- PASS: TestMountStart/serial/RestartStopped (17.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-952881 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052488 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1221 20:43:16.725054  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:43:33.673781  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:44:07.332780  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:44:33.752001  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052488 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.494661444s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.83s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-052488 -- rollout status deployment/busybox: (3.647526744s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-kpxh5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-mtd2s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-kpxh5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-mtd2s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-kpxh5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-mtd2s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-kpxh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-kpxh5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-mtd2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052488 -- exec busybox-7b57f96db7-mtd2s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                    
TestMultiNode/serial/AddNode (39.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-052488 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-052488 -v=5 --alsologtostderr: (38.934958649s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-052488 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp testdata/cp-test.txt multinode-052488:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4180057568/001/cp-test_multinode-052488.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488:/home/docker/cp-test.txt multinode-052488-m02:/home/docker/cp-test_multinode-052488_multinode-052488-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test_multinode-052488_multinode-052488-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488:/home/docker/cp-test.txt multinode-052488-m03:/home/docker/cp-test_multinode-052488_multinode-052488-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test_multinode-052488_multinode-052488-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp testdata/cp-test.txt multinode-052488-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4180057568/001/cp-test_multinode-052488-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m02:/home/docker/cp-test.txt multinode-052488:/home/docker/cp-test_multinode-052488-m02_multinode-052488.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test_multinode-052488-m02_multinode-052488.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m02:/home/docker/cp-test.txt multinode-052488-m03:/home/docker/cp-test_multinode-052488-m02_multinode-052488-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test_multinode-052488-m02_multinode-052488-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp testdata/cp-test.txt multinode-052488-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4180057568/001/cp-test_multinode-052488-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m03:/home/docker/cp-test.txt multinode-052488:/home/docker/cp-test_multinode-052488-m03_multinode-052488.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488 "sudo cat /home/docker/cp-test_multinode-052488-m03_multinode-052488.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 cp multinode-052488-m03:/home/docker/cp-test.txt multinode-052488-m02:/home/docker/cp-test_multinode-052488-m03_multinode-052488-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 ssh -n multinode-052488-m02 "sudo cat /home/docker/cp-test_multinode-052488-m03_multinode-052488-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.01s)

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-052488 node stop m03: (1.519198017s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052488 status: exit status 7 (323.305831ms)

                                                
                                                
-- stdout --
	multinode-052488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-052488-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-052488-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr: exit status 7 (325.364058ms)

                                                
                                                
-- stdout --
	multinode-052488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-052488-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-052488-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:45:42.561918  151942 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:45:42.562191  151942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:45:42.562201  151942 out.go:374] Setting ErrFile to fd 2...
	I1221 20:45:42.562205  151942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:45:42.562403  151942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:45:42.562584  151942 out.go:368] Setting JSON to false
	I1221 20:45:42.562617  151942 mustload.go:66] Loading cluster: multinode-052488
	I1221 20:45:42.562749  151942 notify.go:221] Checking for updates...
	I1221 20:45:42.563138  151942 config.go:182] Loaded profile config "multinode-052488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:45:42.563168  151942 status.go:174] checking status of multinode-052488 ...
	I1221 20:45:42.565559  151942 status.go:371] multinode-052488 host status = "Running" (err=<nil>)
	I1221 20:45:42.565576  151942 host.go:66] Checking if "multinode-052488" exists ...
	I1221 20:45:42.568074  151942 main.go:144] libmachine: domain multinode-052488 has defined MAC address 52:54:00:b7:0a:48 in network mk-multinode-052488
	I1221 20:45:42.568530  151942 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:0a:48", ip: ""} in network mk-multinode-052488: {Iface:virbr1 ExpiryTime:2025-12-21 21:43:28 +0000 UTC Type:0 Mac:52:54:00:b7:0a:48 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-052488 Clientid:01:52:54:00:b7:0a:48}
	I1221 20:45:42.568580  151942 main.go:144] libmachine: domain multinode-052488 has defined IP address 192.168.39.53 and MAC address 52:54:00:b7:0a:48 in network mk-multinode-052488
	I1221 20:45:42.568718  151942 host.go:66] Checking if "multinode-052488" exists ...
	I1221 20:45:42.568949  151942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:45:42.571400  151942 main.go:144] libmachine: domain multinode-052488 has defined MAC address 52:54:00:b7:0a:48 in network mk-multinode-052488
	I1221 20:45:42.571842  151942 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:0a:48", ip: ""} in network mk-multinode-052488: {Iface:virbr1 ExpiryTime:2025-12-21 21:43:28 +0000 UTC Type:0 Mac:52:54:00:b7:0a:48 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:multinode-052488 Clientid:01:52:54:00:b7:0a:48}
	I1221 20:45:42.571875  151942 main.go:144] libmachine: domain multinode-052488 has defined IP address 192.168.39.53 and MAC address 52:54:00:b7:0a:48 in network mk-multinode-052488
	I1221 20:45:42.572049  151942 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/multinode-052488/id_rsa Username:docker}
	I1221 20:45:42.656709  151942 ssh_runner.go:195] Run: systemctl --version
	I1221 20:45:42.662814  151942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:45:42.680501  151942 kubeconfig.go:125] found "multinode-052488" server: "https://192.168.39.53:8443"
	I1221 20:45:42.680540  151942 api_server.go:166] Checking apiserver status ...
	I1221 20:45:42.680590  151942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:45:42.699974  151942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	W1221 20:45:42.711311  151942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:45:42.711372  151942 ssh_runner.go:195] Run: ls
	I1221 20:45:42.716256  151942 api_server.go:253] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I1221 20:45:42.720774  151942 api_server.go:279] https://192.168.39.53:8443/healthz returned 200:
	ok
	I1221 20:45:42.720795  151942 status.go:463] multinode-052488 apiserver status = Running (err=<nil>)
	I1221 20:45:42.720806  151942 status.go:176] multinode-052488 status: &{Name:multinode-052488 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:45:42.720827  151942 status.go:174] checking status of multinode-052488-m02 ...
	I1221 20:45:42.722502  151942 status.go:371] multinode-052488-m02 host status = "Running" (err=<nil>)
	I1221 20:45:42.722519  151942 host.go:66] Checking if "multinode-052488-m02" exists ...
	I1221 20:45:42.724928  151942 main.go:144] libmachine: domain multinode-052488-m02 has defined MAC address 52:54:00:f7:0b:28 in network mk-multinode-052488
	I1221 20:45:42.725308  151942 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f7:0b:28", ip: ""} in network mk-multinode-052488: {Iface:virbr1 ExpiryTime:2025-12-21 21:44:21 +0000 UTC Type:0 Mac:52:54:00:f7:0b:28 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-052488-m02 Clientid:01:52:54:00:f7:0b:28}
	I1221 20:45:42.725329  151942 main.go:144] libmachine: domain multinode-052488-m02 has defined IP address 192.168.39.100 and MAC address 52:54:00:f7:0b:28 in network mk-multinode-052488
	I1221 20:45:42.725462  151942 host.go:66] Checking if "multinode-052488-m02" exists ...
	I1221 20:45:42.725689  151942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:45:42.727748  151942 main.go:144] libmachine: domain multinode-052488-m02 has defined MAC address 52:54:00:f7:0b:28 in network mk-multinode-052488
	I1221 20:45:42.728128  151942 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f7:0b:28", ip: ""} in network mk-multinode-052488: {Iface:virbr1 ExpiryTime:2025-12-21 21:44:21 +0000 UTC Type:0 Mac:52:54:00:f7:0b:28 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-052488-m02 Clientid:01:52:54:00:f7:0b:28}
	I1221 20:45:42.728160  151942 main.go:144] libmachine: domain multinode-052488-m02 has defined IP address 192.168.39.100 and MAC address 52:54:00:f7:0b:28 in network mk-multinode-052488
	I1221 20:45:42.728295  151942 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22179-122429/.minikube/machines/multinode-052488-m02/id_rsa Username:docker}
	I1221 20:45:42.808419  151942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:45:42.824598  151942 status.go:176] multinode-052488-m02 status: &{Name:multinode-052488-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:45:42.824649  151942 status.go:174] checking status of multinode-052488-m03 ...
	I1221 20:45:42.826558  151942 status.go:371] multinode-052488-m03 host status = "Stopped" (err=<nil>)
	I1221 20:45:42.826591  151942 status.go:384] host is not running, skipping remaining checks
	I1221 20:45:42.826600  151942 status.go:176] multinode-052488-m03 status: &{Name:multinode-052488-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-052488 node start m03 -v=5 --alsologtostderr: (37.719311869s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (299.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052488
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-052488
E1221 20:46:30.707042  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:48:33.675327  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:49:07.335721  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-052488: (2m59.606053031s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052488 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052488 --wait=true -v=5 --alsologtostderr: (2m0.111068531s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052488
--- PASS: TestMultiNode/serial/RestartKeepsNodes (299.84s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-052488 node delete m03: (2.112040233s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.56s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (152.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 stop
E1221 20:51:30.706868  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:53:33.675169  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-052488 stop: (2m32.712237188s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052488 status: exit status 7 (67.044778ms)

                                                
                                                
-- stdout --
	multinode-052488
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-052488-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr: exit status 7 (64.394707ms)

                                                
                                                
-- stdout --
	multinode-052488
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-052488-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:53:56.295299  154307 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:53:56.295612  154307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:53:56.295623  154307 out.go:374] Setting ErrFile to fd 2...
	I1221 20:53:56.295627  154307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:53:56.295865  154307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:53:56.296090  154307 out.go:368] Setting JSON to false
	I1221 20:53:56.296124  154307 mustload.go:66] Loading cluster: multinode-052488
	I1221 20:53:56.296190  154307 notify.go:221] Checking for updates...
	I1221 20:53:56.296622  154307 config.go:182] Loaded profile config "multinode-052488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:53:56.296649  154307 status.go:174] checking status of multinode-052488 ...
	I1221 20:53:56.298868  154307 status.go:371] multinode-052488 host status = "Stopped" (err=<nil>)
	I1221 20:53:56.298884  154307 status.go:384] host is not running, skipping remaining checks
	I1221 20:53:56.298889  154307 status.go:176] multinode-052488 status: &{Name:multinode-052488 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:53:56.298906  154307 status.go:174] checking status of multinode-052488-m02 ...
	I1221 20:53:56.300327  154307 status.go:371] multinode-052488-m02 host status = "Stopped" (err=<nil>)
	I1221 20:53:56.300343  154307 status.go:384] host is not running, skipping remaining checks
	I1221 20:53:56.300348  154307 status.go:176] multinode-052488-m02 status: &{Name:multinode-052488-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (152.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (85.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052488 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1221 20:54:07.332910  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052488 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m24.755396513s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052488 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.23s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.93s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052488
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052488-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-052488-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.789777ms)

                                                
                                                
-- stdout --
	* [multinode-052488-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-052488-m02' is duplicated with machine name 'multinode-052488-m02' in profile 'multinode-052488'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052488-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052488-m03 --driver=kvm2  --container-runtime=crio: (38.716450332s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-052488
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-052488: exit status 80 (204.809327ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-052488 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-052488-m03 already exists in multinode-052488-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-052488-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.93s)

                                                
                                    
TestScheduledStopUnix (108.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-579649 --memory=3072 --driver=kvm2  --container-runtime=crio
E1221 20:56:30.706852  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-579649 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.690577922s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-579649 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:56:39.864483  155660 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:56:39.864786  155660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:56:39.864798  155660 out.go:374] Setting ErrFile to fd 2...
	I1221 20:56:39.864805  155660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:56:39.865010  155660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:56:39.865290  155660 out.go:368] Setting JSON to false
	I1221 20:56:39.865400  155660 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:56:39.865724  155660 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:56:39.865809  155660 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/scheduled-stop-579649/config.json ...
	I1221 20:56:39.866008  155660 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:56:39.866138  155660 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-579649 -n scheduled-stop-579649
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-579649 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:56:40.157318  155705 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:56:40.157628  155705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:56:40.157639  155705 out.go:374] Setting ErrFile to fd 2...
	I1221 20:56:40.157643  155705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:56:40.157863  155705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:56:40.158136  155705 out.go:368] Setting JSON to false
	I1221 20:56:40.158372  155705 daemonize_unix.go:73] killing process 155694 as it is an old scheduled stop
	I1221 20:56:40.158500  155705 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:56:40.158910  155705 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:56:40.159011  155705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/scheduled-stop-579649/config.json ...
	I1221 20:56:40.159225  155705 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:56:40.159370  155705 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1221 20:56:40.164062  126345 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/scheduled-stop-579649/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-579649 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-579649 -n scheduled-stop-579649
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-579649
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-579649 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:57:05.895737  155869 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:57:05.895996  155869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:57:05.896005  155869 out.go:374] Setting ErrFile to fd 2...
	I1221 20:57:05.896009  155869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:57:05.896178  155869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 20:57:05.896427  155869 out.go:368] Setting JSON to false
	I1221 20:57:05.896527  155869 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:57:05.896804  155869 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:57:05.896865  155869 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/scheduled-stop-579649/config.json ...
	I1221 20:57:05.897070  155869 mustload.go:66] Loading cluster: scheduled-stop-579649
	I1221 20:57:05.897163  155869 config.go:182] Loaded profile config "scheduled-stop-579649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1221 20:57:10.382808  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-579649
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-579649: exit status 7 (64.587531ms)

                                                
                                                
-- stdout --
	scheduled-stop-579649
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-579649 -n scheduled-stop-579649
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-579649 -n scheduled-stop-579649: exit status 7 (63.178849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-579649" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-579649
--- PASS: TestScheduledStopUnix (108.35s)

                                                
                                    
TestRunningBinaryUpgrade (401.63s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2046893904 start -p running-upgrade-787082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2046893904 start -p running-upgrade-787082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m41.482188866s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-787082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-787082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m58.374519948s)
helpers_test.go:176: Cleaning up "running-upgrade-787082" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-787082
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-787082: (1.094969802s)
--- PASS: TestRunningBinaryUpgrade (401.63s)

                                                
                                    
TestKubernetesUpgrade (177.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.523541863s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-854622
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-854622: (1.853822841s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-854622 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-854622 status --format={{.Host}}: exit status 7 (66.119561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1221 20:58:33.673381  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:59:07.333613  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.65445984s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-854622 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (78.338288ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854622] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-854622
	    minikube start -p kubernetes-upgrade-854622 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8546222 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-854622 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1221 20:59:56.727711  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854622 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.580950456s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-854622" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-854622
--- PASS: TestKubernetesUpgrade (177.80s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.497015ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-747549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (84.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-747549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-747549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.954900607s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-747549 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (84.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (24.725120717s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-747549 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-747549 status -o json: exit status 2 (202.383976ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-747549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-747549
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.80s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (87.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1590112384 start -p stopped-upgrade-251772 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1590112384 start -p stopped-upgrade-251772 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (47.657740762s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1590112384 -p stopped-upgrade-251772 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1590112384 -p stopped-upgrade-251772 stop: (1.70473549s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-251772 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-251772 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.065056009s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.43s)

                                                
                                    
TestNoKubernetes/serial/Start (50.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-747549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.146993002s)
--- PASS: TestNoKubernetes/serial/Start (50.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22179-122429/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-747549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-747549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (163.536576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-747549
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-747549: (1.251526003s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (35.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-747549 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-747549 --driver=kvm2  --container-runtime=crio: (35.205724839s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.21s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (126.15s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-759510 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-759510 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m57.411360864s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-759510 image pull public.ecr.aws/docker/library/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-759510 image pull public.ecr.aws/docker/library/busybox:latest: (1.5503549s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-759510
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-759510: (7.191520109s)
--- PASS: TestPreload/Start-NoPreload-PullImage (126.15s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-251772
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-251772: (1.201917677s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-747549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-747549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (170.320825ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-340687 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-340687 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (109.241774ms)

                                                
                                                
-- stdout --
	* [false-340687] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 21:01:14.179693  159721 out.go:360] Setting OutFile to fd 1 ...
	I1221 21:01:14.179834  159721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:01:14.179849  159721 out.go:374] Setting ErrFile to fd 2...
	I1221 21:01:14.179856  159721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 21:01:14.180042  159721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-122429/.minikube/bin
	I1221 21:01:14.180549  159721 out.go:368] Setting JSON to false
	I1221 21:01:14.181355  159721 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17024,"bootTime":1766333850,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 21:01:14.181410  159721 start.go:143] virtualization: kvm guest
	I1221 21:01:14.183517  159721 out.go:179] * [false-340687] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 21:01:14.185044  159721 notify.go:221] Checking for updates...
	I1221 21:01:14.185060  159721 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 21:01:14.186365  159721 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 21:01:14.187834  159721 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-122429/kubeconfig
	I1221 21:01:14.189199  159721 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-122429/.minikube
	I1221 21:01:14.190522  159721 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 21:01:14.191837  159721 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 21:01:14.193619  159721 config.go:182] Loaded profile config "force-systemd-env-764266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:01:14.193718  159721 config.go:182] Loaded profile config "running-upgrade-787082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1221 21:01:14.193788  159721 config.go:182] Loaded profile config "test-preload-759510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 21:01:14.193873  159721 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 21:01:14.224530  159721 out.go:179] * Using the kvm2 driver based on user configuration
	I1221 21:01:14.225703  159721 start.go:309] selected driver: kvm2
	I1221 21:01:14.225718  159721 start.go:928] validating driver "kvm2" against <nil>
	I1221 21:01:14.225728  159721 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 21:01:14.227325  159721 out.go:203] 
	W1221 21:01:14.228512  159721 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1221 21:01:14.229686  159721 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-340687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:59:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.202:8443
  name: running-upgrade-787082
contexts:
- context:
    cluster: running-upgrade-787082
    user: running-upgrade-787082
  name: running-upgrade-787082
current-context: ""
kind: Config
users:
- name: running-upgrade-787082
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.crt
    client-key: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-340687

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-340687"

                                                
                                                
----------------------- debugLogs end: false-340687 [took: 3.328317855s] --------------------------------
helpers_test.go:176: Cleaning up "false-340687" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-340687
--- PASS: TestNetworkPlugins/group/false (3.63s)

                                                
                                    
TestISOImage/Setup (50.18s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-667849 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1221 21:01:30.704683  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-667849 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.177961727s)
--- PASS: TestISOImage/Setup (50.18s)

                                                
                                    
TestISOImage/Binaries/crictl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

                                                
                                    
TestISOImage/Binaries/curl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.16s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.17s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)
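The TestISOImage/Binaries subtests above all run the same check: `which <binary>` inside the guest over `minikube ssh`, passing when the binary resolves. Below is a minimal standalone sketch of that pattern in Go; the binary path and the guest-667849 profile name are taken from the log, while the helper itself is illustrative and not the actual iso_test.go code.

package main

import (
	"fmt"
	"os/exec"
)

// checkGuestBinary runs "which <name>" inside the minikube guest over SSH,
// mirroring the pattern used by the TestISOImage/Binaries subtests above.
func checkGuestBinary(profile, name string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", fmt.Sprintf("which %s", name))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s not found in guest: %v (output: %s)", name, err, out)
	}
	return nil
}

func main() {
	for _, bin := range []string{"git", "iptables", "podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"} {
		if err := checkGuestBinary("guest-667849", bin); err != nil {
			fmt.Println(err)
		}
	}
}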

                                                
                                    
TestPause/serial/Start (80.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-471447 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1221 21:03:33.673317  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-471447 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m20.410834687s)
--- PASS: TestPause/serial/Start (80.41s)

TestStartStop/group/old-k8s-version/serial/FirstStart (91.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m31.739556234s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (91.74s)

TestStartStop/group/no-preload/serial/FirstStart (105.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m45.112220118s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-458928 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [fd9e6ecb-9c76-4241-afea-420299d68f29] Pending
helpers_test.go:353: "busybox" [fd9e6ecb-9c76-4241-afea-420299d68f29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [fd9e6ecb-9c76-4241-afea-420299d68f29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004294668s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-458928 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)
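Each DeployApp step above follows the same recipe: create the busybox pod from testdata/busybox.yaml, wait for the pod labelled integration-test=busybox to become Ready, then exec `ulimit -n` inside it. A rough equivalent driven through plain kubectl is sketched below; the context name comes from the log, and `kubectl wait` stands in for the test's own label-polling helper in helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, returning any error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	ctx := "old-k8s-version-458928" // context name taken from the log above
	steps := [][]string{
		{"kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml"},
		// The test polls for the pod by label; `kubectl wait` is a close substitute.
		{"kubectl", "--context", ctx, "wait", "--for=condition=Ready", "pod", "-l", "integration-test=busybox", "--timeout=8m"},
		{"kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}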

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-725192 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-725192 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m25.726620708s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-458928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-458928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.537176526s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-458928 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)

TestStartStop/group/old-k8s-version/serial/Stop (83.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-458928 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-458928 --alsologtostderr -v=3: (1m23.107691132s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (83.11s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-419917 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f04ba421-bf8a-4eb9-a7a8-ab69b4ae8295] Pending
helpers_test.go:353: "busybox" [f04ba421-bf8a-4eb9-a7a8-ab69b4ae8295] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f04ba421-bf8a-4eb9-a7a8-ab69b4ae8295] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004975917s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-419917 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-419917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-419917 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/Stop (90.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-419917 --alsologtostderr -v=3
E1221 21:06:30.704936  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-419917 --alsologtostderr -v=3: (1m30.301835666s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.30s)

TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-725192 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0f900300-a980-4c88-8c5c-ed2c277d4841] Pending
helpers_test.go:353: "busybox" [0f900300-a980-4c88-8c5c-ed2c277d4841] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0f900300-a980-4c88-8c5c-ed2c277d4841] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003661854s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-725192 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-458928 -n old-k8s-version-458928
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-458928 -n old-k8s-version-458928: exit status 7 (63.1928ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-458928 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
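EnableAddonAfterStop keys off the exit code of `minikube status` rather than its output: exit status 7 is reported for a stopped host, and the test treats it as acceptable ("may be ok") before enabling the dashboard addon. A small sketch of reading that exit code from Go follows; the exit-code meaning is only what this log shows, not a complete mapping.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatusExitCode runs `minikube status` for a profile and returns the
// process exit code; in this log an exit code of 7 corresponds to "Stopped".
func hostStatusExitCode(profile string) (int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	err := cmd.Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return -1, err // the binary could not be run at all
}

func main() {
	code, err := hostStatusExitCode("old-k8s-version-458928")
	if err != nil {
		fmt.Println("status failed to run:", err)
		return
	}
	fmt.Println("minikube status exit code:", code) // 7 was observed here for a stopped host
}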

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-458928 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.141608015s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-458928 -n old-k8s-version-458928
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-725192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-725192 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/embed-certs/serial/Stop (71.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-725192 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-725192 --alsologtostderr -v=3: (1m11.048042851s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (71.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-419917 -n no-preload-419917
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-419917 -n no-preload-419917: exit status 7 (76.937905ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-419917 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (53.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-419917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (53.025206154s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-419917 -n no-preload-419917
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.30s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-ztrkv" [9f75b844-f830-4eb4-b04c-c9fa6b662990] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-ztrkv" [9f75b844-f830-4eb4-b04c-c9fa6b662990] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004279261s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-ztrkv" [9f75b844-f830-4eb4-b04c-c9fa6b662990] Running
E1221 21:08:33.673820  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00492489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-458928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-458928 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
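VerifyKubernetesImages lists the images present in the node and reports anything that is not a stock minikube/Kubernetes image (here busybox and kindnetd). The sketch below does a similar scan over the plain `minikube image list` output; the allowlist prefixes are illustrative assumptions, not the list the test actually uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listImages returns the image names reported by `minikube image list`
// for the given profile, one name per line in the default output.
func listImages(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	images, err := listImages("old-k8s-version-458928")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Illustrative allowlist: images outside these prefixes get reported,
	// similar to the "Found non-minikube image" lines in the log above.
	allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		ok := false
		for _, prefix := range allowed {
			if strings.HasPrefix(img, prefix) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}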

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-458928 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-458928 -n old-k8s-version-458928
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-458928 -n old-k8s-version-458928: exit status 2 (241.646589ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-458928 -n old-k8s-version-458928
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-458928 -n old-k8s-version-458928: exit status 2 (259.655424ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-458928 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-458928 -n old-k8s-version-458928
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-458928 -n old-k8s-version-458928
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
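The Pause subtest runs a fixed command sequence: `pause`, a status check of APIServer and Kubelet (both exit with status 2 while paused, printing "Paused" and "Stopped"), then `unpause` and the same status checks again. The commands below are copied from the log and simply replayed from Go as a sketch; expect non-zero exits on the status calls while the cluster is paused.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-458928" // profile name from the log above
	mk := "out/minikube-linux-amd64"
	// The same sequence the Pause subtest runs; while paused, the status
	// commands exit with status 2 and print "Paused" / "Stopped".
	cmds := [][]string{
		{mk, "pause", "-p", profile, "--alsologtostderr", "-v=1"},
		{mk, "status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
		{mk, "status", "--format={{.Kubelet}}", "-p", profile, "-n", profile},
		{mk, "unpause", "-p", profile, "--alsologtostderr", "-v=1"},
		{mk, "status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
		{mk, "status", "--format={{.Kubelet}}", "-p", profile, "-n", profile},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		// Non-zero exits are expected for the status calls while paused.
		fmt.Printf("%v -> %s (err: %v)\n", c[1:], out, err)
	}
}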

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-665515 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-665515 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m22.929037674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725192 -n embed-certs-725192
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725192 -n embed-certs-725192: exit status 7 (81.188213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-725192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (55.17s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-725192 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-725192 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (54.831378042s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-725192 -n embed-certs-725192
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-z6t4x" [5fc67ee1-1034-40e4-8e61-9532de7c4ef5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00398672s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-z6t4x" [5fc67ee1-1034-40e4-8e61-9532de7c4ef5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005888146s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-419917 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-419917 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-419917 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-419917 -n no-preload-419917
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-419917 -n no-preload-419917: exit status 2 (272.842452ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-419917 -n no-preload-419917
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-419917 -n no-preload-419917: exit status 2 (274.558443ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-419917 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-419917 -n no-preload-419917
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-419917 -n no-preload-419917
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

TestStartStop/group/newest-cni/serial/FirstStart (54.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-637254 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-637254 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (54.788791535s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.79s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xlznf" [87d04017-5f69-4796-b85c-71277cb76e0c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xlznf" [87d04017-5f69-4796-b85c-71277cb76e0c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00422603s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xlznf" [87d04017-5f69-4796-b85c-71277cb76e0c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004279128s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-725192 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-725192 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-725192 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725192 -n embed-certs-725192
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725192 -n embed-certs-725192: exit status 2 (254.046981ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-725192 -n embed-certs-725192
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-725192 -n embed-certs-725192: exit status 2 (239.248359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-725192 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-725192 -n embed-certs-725192
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-725192 -n embed-certs-725192
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-637254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-637254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192094756s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-665515 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2b9fc1bc-3a2f-45cc-b258-61fec6dea190] Pending
helpers_test.go:353: "busybox" [2b9fc1bc-3a2f-45cc-b258-61fec6dea190] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2b9fc1bc-3a2f-45cc-b258-61fec6dea190] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005168795s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-665515 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/newest-cni/serial/Stop (8.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-637254 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-637254 --alsologtostderr -v=3: (8.034986042s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.04s)

TestNetworkPlugins/group/auto/Start (82.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.287254118s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-637254 -n newest-cni-637254
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-637254 -n newest-cni-637254: exit status 7 (63.553993ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-637254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (48.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-637254 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-637254 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (48.60741079s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-637254 -n newest-cni-637254
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-665515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-665515 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (84.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-665515 --alsologtostderr -v=3
E1221 21:10:56.996721  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.002029  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.012343  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.032646  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.072843  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.153270  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.313717  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:57.634217  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:58.275373  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:10:59.556472  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-665515 --alsologtostderr -v=3: (1m24.675361864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-637254 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-637254 --alsologtostderr -v=1
E1221 21:11:02.117045  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-637254 -n newest-cni-637254
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-637254 -n newest-cni-637254: exit status 2 (278.89741ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-637254 -n newest-cni-637254
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-637254 -n newest-cni-637254: exit status 2 (275.742023ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-637254 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-637254 -n newest-cni-637254
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-637254 -n newest-cni-637254
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.10s)

TestNetworkPlugins/group/kindnet/Start (63.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1221 21:11:07.237849  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:17.478665  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.369699  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.375133  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.385571  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.406032  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.446397  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.527607  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:18.688024  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:19.008802  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:19.649772  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:20.930703  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:23.491872  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m3.978220858s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-340687 "pgrep -a kubelet"
I1221 21:11:27.156328  126345 config.go:182] Loaded profile config "auto-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hh6qk" [0cec81b6-9cdf-49f2-95af-977515d838ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 21:11:28.612119  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:11:30.704665  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-hh6qk" [0cec81b6-9cdf-49f2-95af-977515d838ad] Running
E1221 21:11:37.959621  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004558932s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1221 21:11:38.853367  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515: exit status 7 (73.31537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-665515 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.64s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-665515 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-665515 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (49.362805806s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.64s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.64s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1221 21:11:59.334452  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m16.639573039s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-z5pbw" [b904012b-ebc2-46e0-a0be-dc46ef914abe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00424833s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-340687 "pgrep -a kubelet"
I1221 21:12:16.251366  126345 config.go:182] Loaded profile config "kindnet-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lc8jc" [ad6210e7-748c-49b2-b819-27926b10f68f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 21:12:18.920682  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-lc8jc" [ad6210e7-748c-49b2-b819-27926b10f68f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005025543s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mp8sm" [6b77d27e-7e74-40ef-9439-37d7fb5ae268] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mp8sm" [6b77d27e-7e74-40ef-9439-37d7fb5ae268] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004265065s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mp8sm" [6b77d27e-7e74-40ef-9439-37d7fb5ae268] Running
E1221 21:12:40.295176  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004076391s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-665515 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.187574285s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-665515 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-665515 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-665515 --alsologtostderr -v=1: (1.00459359s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515: exit status 2 (259.767003ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515: exit status 2 (266.053333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-665515 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-665515 -n default-k8s-diff-port-665515
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (95.01s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m35.007859554s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-gqg4l" [973b6bb5-79bd-416b-bc80-33e92f05e6f0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-gqg4l" [973b6bb5-79bd-416b-bc80-33e92f05e6f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006712371s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-340687 "pgrep -a kubelet"
I1221 21:13:16.477890  126345 config.go:182] Loaded profile config "calico-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-g9p7c" [02c510ed-6e04-457d-a9f3-e1b0adb87ff6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-g9p7c" [02c510ed-6e04-457d-a9f3-e1b0adb87ff6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005855399s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (67.42s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1221 21:13:50.383086  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m7.417941779s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-340687 "pgrep -a kubelet"
I1221 21:13:55.004095  126345 config.go:182] Loaded profile config "custom-flannel-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qh794" [cca57611-273d-4157-9e26-8f3afbbd2a5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qh794" [cca57611-273d-4157-9e26-8f3afbbd2a5c] Running
E1221 21:14:02.215476  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005224156s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.51s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-340687 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m25.507481056s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-340687 "pgrep -a kubelet"
I1221 21:14:25.000649  126345 config.go:182] Loaded profile config "enable-default-cni-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ngxz8" [6237635e-9ce6-459e-bd25-c7c6fffdf40a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ngxz8" [6237635e-9ce6-459e-bd25-c7c6fffdf40a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005322505s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-hx8fk" [80a3d570-a7a7-4bff-8d84-95eb12a96778] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006037151s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.08s)
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-879606 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
preload_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-879606 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio: (3.92911444s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-879606" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-879606
--- PASS: TestPreload/PreloadSrc/gcs (4.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-340687 "pgrep -a kubelet"
I1221 21:14:56.658386  126345 config.go:182] Loaded profile config "flannel-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jtt8b" [39bc44d9-5253-4d36-9b90-dda7958541b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jtt8b" [39bc44d9-5253-4d36-9b90-dda7958541b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008192692s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestPreload/PreloadSrc/github (5.21s)
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-611419 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
preload_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-611419 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio: (4.940880797s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-611419" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-611419
--- PASS: TestPreload/PreloadSrc/github (5.21s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.6s)
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-451655 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-451655" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-451655
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.60s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.17s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
E1221 21:15:04.308104  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:15:04.313453  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:15:04.323765  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:15:04.344047  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:15:04.384506  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                    
TestISOImage/VersionJSON (0.17s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "cat /version.json"
E1221 21:15:04.465510  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1766254259-22261
iso_test.go:118:   kicbase_version: v0.0.48-1765966054-22186
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 764225f3ed0bdecb68079a8ea89e24916c87858e
--- PASS: TestISOImage/VersionJSON (0.17s)
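For reference, a minimal Go sketch of the kind of check TestISOImage/VersionJSON performs: read /version.json from the guest and decode the four fields reported above. This is illustrative only and not the code in iso_test.go; the struct tags assume the JSON keys match the printed labels (iso_version, kicbase_version, minikube_version, commit), and the file path is taken from the ssh command shown in the log.

// Illustrative sketch only (not iso_test.go): decode /version.json and
// print the fields the test reports. JSON key names are assumed to match
// the printed labels.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	data, err := os.ReadFile("/version.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read /version.json:", err)
		os.Exit(1)
	}
	var v versionInfo
	if err := json.Unmarshal(data, &v); err != nil {
		fmt.Fprintln(os.Stderr, "parse /version.json:", err)
		os.Exit(1)
	}
	fmt.Println("iso_version:", v.ISOVersion)
	fmt.Println("kicbase_version:", v.KicbaseVersion)
	fmt.Println("minikube_version:", v.MinikubeVersion)
	fmt.Println("commit:", v.Commit)
}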

                                                
                                    
TestISOImage/eBPFSupport (0.16s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-667849 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1221 21:15:04.626387  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.16s)
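The eBPF check above only looks for the kernel's BTF blob. A minimal Go sketch of the same probe, assuming the /sys/kernel/btf/vmlinux path from the ssh'd shell command in the log (this is not the test's own code):

// Illustrative sketch: report whether the kernel exposes BTF type
// information, which the shell command above checks by testing for
// /sys/kernel/btf/vmlinux.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err != nil {
		fmt.Println("NOT FOUND")
		os.Exit(1)
	}
	fmt.Println("OK")
}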
E1221 21:15:05.588570  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:15:06.869414  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-340687 "pgrep -a kubelet"
I1221 21:15:46.952725  126345 config.go:182] Loaded profile config "bridge-340687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-340687 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nvzqn" [85a2d11f-45c9-456a-9b6f-2d06c2cda5ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nvzqn" [85a2d11f-45c9-456a-9b6f-2d06c2cda5ee] Running
E1221 21:15:56.996797  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004751275s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-340687 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-340687 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
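Each HairPin subtest above asks the netcat pod to reach its own Service name ("nc -w 5 -i 5 -z netcat 8080"). A rough Go equivalent of that reachability probe, run from inside such a pod, is sketched below; the "netcat:8080" address is the one used in the nc command and is assumed to resolve in-cluster. This is not the test's implementation.

// Illustrative sketch of the hairpin probe: open (and immediately close)
// a TCP connection to the pod's own Service, mirroring `nc -w 5 -z netcat 8080`.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "hairpin check failed:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("hairpin check ok")
}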
E1221 21:16:18.369217  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:24.682267  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/old-k8s-version-458928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:26.232602  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.406611  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.411976  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.422359  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.442697  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.483099  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.563589  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:27.724084  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:28.044765  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:28.685879  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:29.966446  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:30.705067  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:32.527953  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:36.729026  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:37.648997  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:46.056439  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/no-preload-419917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:16:47.889431  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:08.369934  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.073838  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.079152  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.089432  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.109753  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.150174  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.230563  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.391198  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:10.712210  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:11.353187  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:12.633904  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:15.194886  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:20.316035  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:30.556944  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:48.153337  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/default-k8s-diff-port-665515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:49.330828  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:51.037378  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:17:53.753439  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.251709  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.257038  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.267334  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.288057  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.328426  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.408793  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.569429  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:10.890260  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:11.531228  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:12.811771  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:15.372240  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:20.492977  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:30.733733  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:31.998102  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/kindnet-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:33.673920  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/addons-659513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:51.214744  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.240582  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.245946  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.256291  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.276611  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.316972  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.397399  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.557907  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:55.878575  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:56.519632  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:18:57.799848  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:00.360593  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:05.481218  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:07.333803  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-089730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:11.252085  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/auto-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:15.721642  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.232312  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.237709  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.248233  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.268592  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.309012  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.389387  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.549917  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:25.870592  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:26.511636  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:27.791942  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:30.352230  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:32.175676  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/calico-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:35.472925  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:36.202874  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/custom-flannel-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 21:19:45.714046  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/enable-default-cni-340687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

Test skip (52/435)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
375 TestStartStop/group/disable-driver-mounts 0.19
379 TestNetworkPlugins/group/kubenet 3.58
387 TestNetworkPlugins/group/cilium 4.07
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-659513 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-920929" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-920929
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1221 21:01:13.752566  126345 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/functional-555265/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: kubenet-340687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:59:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.202:8443
  name: running-upgrade-787082
contexts:
- context:
    cluster: running-upgrade-787082
    user: running-upgrade-787082
  name: running-upgrade-787082
current-context: ""
kind: Config
users:
- name: running-upgrade-787082
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.crt
    client-key: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-340687

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-340687"

                                                
                                                
----------------------- debugLogs end: kubenet-340687 [took: 3.414298057s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-340687" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-340687
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-340687 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-340687" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-122429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:59:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.202:8443
  name: running-upgrade-787082
contexts:
- context:
    cluster: running-upgrade-787082
    user: running-upgrade-787082
  name: running-upgrade-787082
current-context: ""
kind: Config
users:
- name: running-upgrade-787082
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.crt
    client-key: /home/jenkins/minikube-integration/22179-122429/.minikube/profiles/running-upgrade-787082/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-340687

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-340687" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-340687"

                                                
                                                
----------------------- debugLogs end: cilium-340687 [took: 3.865842562s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-340687" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-340687
--- SKIP: TestNetworkPlugins/group/cilium (4.07s)
